Merged
Changes from 1 commit
34 commits
f7be5e6
Uses image cards for the frameworks
samejr Feb 10, 2025
a002e78
Removes old snippets
samejr Feb 10, 2025
57a1ecf
New AI agents side menu section
samejr Feb 10, 2025
3925c62
WIP adding new ai agent pages
samejr Feb 10, 2025
a4e8f43
Better overview page
samejr Feb 10, 2025
a20e082
More copy added to the agent example pages
samejr Feb 11, 2025
134f6a2
Copy improvements
samejr Feb 11, 2025
2b8654e
Removes “Creating a project” page and side menu section
samejr Feb 11, 2025
555b70a
Fixes broken links
samejr Feb 11, 2025
b8a44e5
Updates to the latest Mintlify version, fixes issues, changes theme
samejr Feb 11, 2025
584c826
Adds descriptions to the main dropdown menu items
samejr Feb 11, 2025
8133b81
Reformatted Introduction docs ‘landing page’
samejr Feb 12, 2025
a13d7ed
Retry heartbeat timeouts by putting back in the queue (#1689)
matt-aitken Feb 10, 2025
f0029b8
OOM retrying on larger machines (#1691)
matt-aitken Feb 10, 2025
39b4a4c
Kubernetes OOMs appear as non-zero sigkills, adding support for treat…
matt-aitken Feb 11, 2025
535cae9
Complete the original attempt span if retrying due to an OOM
matt-aitken Feb 11, 2025
dd651ab
Revert "Complete the original attempt span if retrying due to an OOM"
matt-aitken Feb 11, 2025
e375d81
chore: Update version for release (#1666)
github-actions[bot] Feb 11, 2025
0bcf18b
Release 3.3.14
matt-aitken Feb 11, 2025
23095ba
Set machine when triggering docs
matt-aitken Feb 11, 2025
7ca39d8
Batch queue runs that are waiting for deploy (#1693)
matt-aitken Feb 11, 2025
8a24c03
Detect ffmpeg OOM errors, added manual OutOfMemoryError (#1694)
matt-aitken Feb 12, 2025
90de1c8
Improved the machines docs, including the new OutOfMemoryError
matt-aitken Feb 12, 2025
4b50354
chore: Update version for release (#1695)
github-actions[bot] Feb 12, 2025
31d8941
Release 3.3.15
matt-aitken Feb 12, 2025
2c02c8b
Create new partitioned TaskEvent table, and switch to it gradually as…
ericallam Feb 12, 2025
ed972ac
Don't create an attempt if the run is final, batchTriggerAndWait bad …
matt-aitken Feb 12, 2025
3bc5ead
Fix missing logs on child runs by using the root task run createdAt i…
ericallam Feb 12, 2025
37db88b
Provider changes to support image cache (#1700)
nicktrn Feb 12, 2025
d88f5bc
Fix run container exits after OOM retries (#1701)
nicktrn Feb 12, 2025
baa5ead
Upgrade local dev to use electric beta.15 (#1699)
ericallam Feb 13, 2025
3f6b934
Text fixes
samejr Feb 13, 2025
a32be10
Merge remote-tracking branch 'origin/main' into agent-docs-examples
samejr Feb 13, 2025
5b41766
Removed pnpm files
samejr Feb 13, 2025
Copy improvements
samejr committed Feb 11, 2025
commit 134f6a292faafd5c7cb6d51efec311458f70a758
2 changes: 1 addition & 1 deletion docs/guides/ai-agents/generate-translate-copy.mdx
@@ -17,7 +17,7 @@ In this example, we'll create a workflow that generates and translates copy. Thi
**This task:**

- Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
-- Uses `experimental_telemetry` in the source verification and historical analysis tasks to provide LLM logs
+- Uses `experimental_telemetry` to provide LLM logs
- Generates marketing copy based on subject and target word count
- Validates the generated copy meets word count requirements (±10 words)
- Translates the validated copy to the target language while preserving tone
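The word-count validation step described in the bullets above can be sketched as a small helper. The function name, signature, and tolerance handling here are illustrative assumptions, not taken from the docs' source:

```typescript
// Hypothetical sketch of the word-count check: the generated copy must
// land within ±10 words of the requested target count.
function isWithinWordCount(
  copy: string,
  targetWordCount: number,
  tolerance = 10
): boolean {
  // Split on any whitespace run; filter(Boolean) drops empty strings
  // produced by leading/trailing whitespace.
  const wordCount = copy.trim().split(/\s+/).filter(Boolean).length;
  return Math.abs(wordCount - targetWordCount) <= tolerance;
}
```

In the real task, copy that fails this check would be regenerated before the translation step runs.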
10 changes: 5 additions & 5 deletions docs/guides/ai-agents/respond-and-check-content.mdx
@@ -1,12 +1,12 @@
---
title: "Respond to customer inquiry and check for inappropriate content"
sidebarTitle: "Respond & check content"
-description: "Create a AI agent workflow that responds to customer inquiries while checking if their text is inappropriate"
+description: "Create an AI agent workflow that responds to customer inquiries while checking if their text is inappropriate"
---

## Overview

-Parallelization is a workflow pattern where multiple tasks or processes run simultaneously instead of sequentially, allowing for more efficient use of resources and faster overall execution. It's particularly valuable when different parts of a task can be handled independently, such as running content analysis and response generation at the same time.
+**Parallelization** is a workflow pattern where multiple tasks or processes run simultaneously instead of sequentially, allowing for more efficient use of resources and faster overall execution. It's particularly valuable when different parts of a task can be handled independently, such as running content analysis and response generation at the same time.
![Parallelization](/guides/ai-agents/parallelization.png)
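The parallelization pattern above can be sketched in plain TypeScript with `Promise.all`. The two stub functions stand in for the real LLM and moderation tasks, which the docs trigger via `batch.triggerByTaskAndWait`; everything below is an illustrative assumption, not the docs' implementation:

```typescript
// Stub for the customer-response branch (an LLM call in the real task).
async function generateResponse(question: string): Promise<string> {
  return `Thanks for asking about: ${question}`;
}

// Stub for the content-moderation branch (also an LLM call in practice).
async function moderateContent(question: string): Promise<boolean> {
  const banned = ["spam"];
  return !banned.some((w) => question.toLowerCase().includes(w));
}

async function handleInquiry(question: string): Promise<string> {
  // Both branches start immediately; total latency is the max of the
  // two, not their sum — the core benefit of parallelization.
  const [response, isAppropriate] = await Promise.all([
    generateResponse(question),
    moderateContent(question),
  ]);
  return isAppropriate ? response : "We can't respond to this message.";
}
```

Only the appropriate branch's output is returned, so the moderation result gates the response without ever blocking its generation.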

## Example task
@@ -16,8 +16,8 @@ In this example, we'll create a workflow that simultaneously checks content for
**This task:**

- Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
-- Uses `experimental_telemetry` in the source verification and historical analysis tasks to provide LLM logs
-- Uses `batch.triggerByTaskAndWait` to run customer response and content moderation tasks in parallel
+- Uses `experimental_telemetry` to provide LLM logs
+- Uses [`batch.triggerByTaskAndWait`](/triggering#batch-triggerbytaskandwait) to run customer response and content moderation tasks in parallel
- Generates customer service responses using an AI model
- Simultaneously checks for inappropriate content while generating responses

@@ -115,7 +115,7 @@ export const handleCustomerQuestion = task({

## Run a test

-On the Test page in the dashboard, select the `respond-and-check-content` task and include a payload like the following:
+On the Test page in the dashboard, select the `handle-customer-question` task and include a payload like the following:

``` json
{
7 changes: 4 additions & 3 deletions docs/guides/ai-agents/route-question.mdx
@@ -6,7 +6,7 @@ description: "Create an AI agent workflow that routes a question to a different

## Overview

-Routing is a workflow pattern that classifies an input and directs it to a specialized followup task. This pattern allows for separation of concerns and building more specialized prompts, which is particularly effective when there are distinct categories that are better handled separately. Without routing, optimizing for one kind of input can hurt performance on other inputs.
+**Routing** is a workflow pattern that classifies an input and directs it to a specialized followup task. This pattern allows for separation of concerns and building more specialized prompts, which is particularly effective when there are distinct categories that are better handled separately. Without routing, optimizing for one kind of input can hurt performance on other inputs.

![Routing](/guides/ai-agents/routing.png)

@@ -18,6 +18,9 @@ In this example, we'll create a workflow that routes a question to a different A

- Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
- Uses `experimental_telemetry` in the source verification and historical analysis tasks to provide LLM logs
+- Routes questions using a lightweight model (`o1-mini`) to classify complexity
+- Directs simple questions to `gpt-4o` and complex ones to `gpt-o3-mini`
+- Returns both the answer and metadata about the routing decision
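As a rough sketch of the routing decision these bullets describe, with a word-count heuristic standing in for the `o1-mini` classifier call (the heuristic, threshold, and return shape are assumptions for illustration):

```typescript
type Complexity = "simple" | "complex";

// Stub heuristic in place of the lightweight LLM router.
function classifyQuestion(question: string): Complexity {
  return question.trim().split(/\s+/).length > 12 ? "complex" : "simple";
}

// Routes to a model and returns metadata about the decision, mirroring
// the model names listed in the bullets above.
function routeQuestion(question: string): { model: string; complexity: Complexity } {
  const complexity = classifyQuestion(question);
  const model = complexity === "simple" ? "gpt-4o" : "gpt-o3-mini";
  return { model, complexity };
}
```

In the real task the routing decision itself comes from an LLM call, but the control flow — classify first, then dispatch to a specialized model — is the same.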

```typescript
import { openai } from "@ai-sdk/openai";
@@ -102,8 +105,6 @@ Triggering our task with a simple question shows it routing to the gpt-4o model
}
```

-This example payload routes the question to a different AI model depending on its complexity.

<video
src="https://content.trigger.dev/agent-routing.mp4"
controls
6 changes: 3 additions & 3 deletions docs/guides/ai-agents/translate-and-refine.mdx
@@ -6,13 +6,13 @@ description: "This guide will show you how to create a task that translates text

## Overview

-This example is based on an "evaluator-optimizer" pattern, where one LLM generates a response while another provides evaluation and feedback in a loop. This is particularly effective for tasks with clear evaluation criteria where iterative refinement provides measurable value.
+This example is based on the **evaluator-optimizer** pattern, where one LLM generates a response while another provides evaluation and feedback in a loop. This is particularly effective for tasks with clear evaluation criteria where iterative refinement provides better results.

![Evaluator-optimizer](/guides/ai-agents/evaluator-optimizer.png)
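The evaluator-optimizer loop can be sketched as follows. Here `generate` and `evaluate` are placeholders for the two LLM calls, and the iteration cap is an assumption for illustration rather than something the docs specify:

```typescript
type Evaluation = { ok: boolean; feedback: string };

// Generic evaluator-optimizer loop: produce a candidate, score it, and
// feed the evaluator's feedback back into the generator until it passes
// or the iteration cap is reached.
function refine(
  generate: (feedback: string) => string,
  evaluate: (candidate: string) => Evaluation,
  maxIterations = 3
): { result: string; iterations: number } {
  let feedback = "";
  let candidate = "";
  for (let i = 1; i <= maxIterations; i++) {
    candidate = generate(feedback);
    const verdict = evaluate(candidate);
    if (verdict.ok) return { result: candidate, iterations: i };
    feedback = verdict.feedback;
  }
  // Cap reached: return the best-effort final candidate.
  return { result: candidate, iterations: maxIterations };
}
```

The cap matters in practice: without it, an evaluator that never approves would loop (and bill) forever.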

## Example task

-Our example task is designed to translate text into a target language and refine the translation over a number of iterations based on feedback provided by the LLM.
+This example task translates text into a target language and refines the translation over a number of iterations based on feedback provided by the LLM.

**This task:**

@@ -159,7 +159,7 @@ On the Test page in the dashboard, select the `translate-and-refine` task and in
}
```

-This example payload translates the text into French and should be suitably difficult to require a few iterations, depending on the model used.
+This example payload translates the text into French and should be suitably difficult to require a few iterations, depending on the model used and the prompt criteria you set.

<video
src="https://content.trigger.dev/agent-evaluator-optimizer.mp4"
16 changes: 8 additions & 8 deletions docs/guides/ai-agents/verify-news-article.mdx
@@ -1,27 +1,27 @@
---
title: "Verify a news article"
sidebarTitle: "Verify news article"
-description: "Create a AI agent workflow that verifies the facts in a news article"
+description: "Create an AI agent workflow that verifies the facts in a news article"
---

## Overview

-This example demonstrates the "orchestrator-workers" pattern, where a central AI agent dynamically breaks down complex tasks and delegates them to specialized worker agents. This pattern is particularly effective when tasks require multiple perspectives or parallel processing streams, with the orchestrator synthesizing the results into a cohesive output.
+This example demonstrates the **orchestrator-workers** pattern, where a central AI agent dynamically breaks down complex tasks and delegates them to specialized worker agents. This pattern is particularly effective when tasks require multiple perspectives or parallel processing streams, with the orchestrator synthesizing the results into a cohesive output.

![Orchestrator](/guides/ai-agents/orchestrator-workers.png)
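A minimal sketch of the orchestrator-workers flow, with stub workers standing in for the source-verification and historical-analysis LLM calls (function names and the report shape are illustrative assumptions):

```typescript
// Worker stub: verifies a claim against recent sources (an LLM call in
// the real task).
async function verifySources(claim: string): Promise<string> {
  return `sources checked for "${claim}"`;
}

// Worker stub: analyzes a claim's historical context.
async function analyzeHistory(claim: string): Promise<string> {
  return `history reviewed for "${claim}"`;
}

// Orchestrator: fans every claim out to both workers concurrently, then
// synthesizes the worker output into a single report.
async function factCheck(claims: string[]) {
  const results = await Promise.all(
    claims.map(async (claim) => {
      const [sources, history] = await Promise.all([
        verifySources(claim),
        analyzeHistory(claim),
      ]);
      return { claim, sources, history };
    })
  );
  return { claimCount: results.length, results };
}
```

In the docs' version the fan-out happens through `batch.triggerByTaskAndWait`, so each worker runs as its own task with independent retries; the `Promise.all` here only illustrates the shape of the pattern.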

## Example task

-Our example task uses multiple AI agents to extract claims from a news article and verify them in parallel, combining source verification and historical analysis to produce a comprehensive fact-checking report.
+Our example task uses multiple LLM calls to extract claims from a news article and analyze them in parallel, combining source verification and historical context to assess their credibility.

**This task:**

- Uses `generateText` from [Vercel's AI SDK](https://sdk.vercel.ai/docs/introduction) to interact with OpenAI models
-- Uses `experimental_telemetry` in the source verification and historical analysis tasks to provide LLM logs
-- Uses `batch.triggerByTaskAndWait` to orchestrate parallel processing of claims
-- Extracts factual claims from news articles using an AI model
-- Verifies claims against recent sources and analyzes historical context in parallel
-- Combines results into a comprehensive fact-checking report
+- Uses `experimental_telemetry` to provide LLM logs
+- Uses [`batch.triggerByTaskAndWait`](/triggering#batch-triggerbytaskandwait) to orchestrate parallel processing of claims
+- Extracts factual claims from news articles using the `o1-mini` model
+- Evaluates claims against recent sources and analyzes historical context in parallel
+- Combines results into a structured analysis report


```typescript