⏱️ Estimated Time: 35-50 minutes
In this workshop, you'll create a new project using the AI Web Chat template in Visual Studio. You'll configure GitHub Models as the AI service provider, set up the connection string, and run and explore the application.
Create a new project using the AI Web Chat template as follows:
1. Open Visual Studio 2022
2. Click "Create a new project"
3. Search for and select the "AI Chat Web App" template
4. Click "Next"
5. Configure your project:
   - Enter "GenAiLab" as the project name
   - Choose a location for your project
   - Make sure "Place solution and project in same directory" is checked
   - Click "Next"
6. Configure AI options:
   - Select "GitHub Models" for AI service provider
   - Select "Qdrant" for Vector store
   - Check the box for "Use keyless authentication for Azure services"
   - Check the box for "Use Aspire orchestration"
   - Click "Create"

   Alternative: Ollama Option: If you're using the Ollama development container (see Development Container Options), you can select "Ollama" as the AI service provider instead of "GitHub Models". This allows you to work with local AI models without requiring a GitHub account or internet connection.

7. Wait for Visual Studio to create the project and restore packages. If a Sign in popup appears, close it.
If you prefer to use the command line, you can create the same project using the .NET CLI:
1. First, ensure you have the AI Chat Web App template installed:

   ```shell
   dotnet new install Microsoft.Extensions.AI.Templates
   ```

2. Navigate to the directory where you want to create your project:

   ```shell
   cd "C:\your\desired\path"
   ```

3. Create the project using the `dotnet new` command with the appropriate parameters:

   ```shell
   dotnet new aichatweb --name GenAiLab --Framework net9.0 --provider githubmodels --vector-store qdrant --aspire true
   ```
This command creates a new AI Chat Web App with:

- Project name: `GenAiLab`
- Framework: .NET 9.0
- AI service provider: GitHub Models
- Vector store: Qdrant
- .NET Aspire orchestration: enabled

Alternative: Ollama Option: If you're using the Ollama development container, you can replace `--provider githubmodels` with `--provider ollama` to use local AI models instead.
4. Navigate into the project directory:

   ```shell
   cd GenAiLab
   ```

   Note for automation: The `dotnet new aichatweb` command creates a solution structure with multiple projects. If you need to move the generated files to a specific directory structure (like `/src/start`), you may need to reorganize the files after creation.
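The reorganization mentioned in the automation note can be sketched as a small shell script. This is only an illustration: the `work` directory and the placeholder solution file stand in for the real `dotnet new aichatweb` output.

```shell
# Sketch: move a freshly generated solution under a src/start layout.
# "work" and the touched .sln file are illustrative stand-ins, not real template output.
mkdir -p work/GenAiLab
touch work/GenAiLab/GenAiLab.sln     # stand-in for the generated solution
mkdir -p work/src/start
mv work/GenAiLab work/src/start/     # relocate the whole project folder
ls work/src/start                    # → GenAiLab
```

Moving the entire project folder in one `mv` keeps relative paths inside the solution intact.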
5. Open the project in your preferred editor:

   ```shell
   code .               # For Visual Studio Code
   # or
   start GenAiLab.sln   # For Visual Studio
   ```
Due to recent updates in .NET Aspire 9.4 and dependency changes in AI packages, you need to update all NuGet packages in the solution to their latest versions (including prerelease packages) before running the application:
1. In the Solution Explorer, right-click on the solution file (`GenAiLab.sln`) and select "Manage NuGet Packages for Solution..."
2. In the NuGet Package Manager, click on the "Updates" tab
3. Check the "Include prerelease" checkbox to include preview versions of AI packages
4. Click "Update All" to update all packages to their latest versions
5. Review and accept any license agreements that appear
6. Wait for all packages to be updated and restored
If you prefer to use the command line, you can update all packages using the `dotnet outdated` tool:

1. First, install the `dotnet outdated` global tool if you haven't already:

   ```shell
   dotnet tool install --global dotnet-outdated-tool
   ```

2. Navigate to the solution directory (if not already there):

   ```shell
   cd GenAiLab
   ```

3. Update all packages in the solution, including prerelease versions:

   ```shell
   dotnet outdated GenAiLab.sln --upgrade --pre-release Always
   ```

4. After the update completes, restore and build the solution to ensure everything is working:

   ```shell
   dotnet restore
   dotnet build
   ```
For GitHub Models to work, you need to set up a connection string with a GitHub token:
Note: This step requires a GitHub account. If you don't have one yet, please follow the instructions in Part 1: Setup to create a GitHub account.
1. Create a GitHub token for accessing GitHub Models:
   - Go to https://github.com/settings/personal-access-tokens/new
   - Click "Generate new token" (fine-grained token)
   - Enter a name for the token, such as "AI Models Access"
   - Under Permissions, set Models to Access: Read-only
   - Click "Generate token" at the bottom of the page
   - Copy the generated token (you won't be able to see it again)
Note: For additional guidance on configuring GitHub Models access, see the Microsoft documentation quickstart.
2. In the Solution Explorer, right-click on the `GenAiLab.AppHost` project and select "Manage User Secrets"
3. In the `secrets.json` file that opens, add the following connection string:

   ```json
   {
     "ConnectionStrings:openai": "Endpoint=https://models.inference.ai.azure.com;Key=YOUR-API-KEY"
   }
   ```

   Replace `YOUR-API-KEY` with the GitHub token you created in step 1.
4. Save the `secrets.json` file.
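If you'd rather not paste the token into the file by hand, the same JSON payload can be generated from an environment variable. This is a sketch only: `GITHUB_TOKEN` below holds a placeholder value, not a real token.

```shell
# Sketch: build the secrets.json payload from a token held in an env var.
# GITHUB_TOKEN is a placeholder here; in practice it would hold your real token.
GITHUB_TOKEN="github_pat_EXAMPLE"
printf '{\n  "ConnectionStrings:openai": "Endpoint=https://models.inference.ai.azure.com;Key=%s"\n}\n' "$GITHUB_TOKEN"
```

The `dotnet user-secrets set` command offers an equivalent CLI route for writing individual secret values without editing `secrets.json` directly.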
Now let's run the application and explore its features:
1. Make sure that Docker Desktop is running. This is required to run containerized resources like Qdrant.
2. Make sure the `GenAiLab.AppHost` project is set as the startup project.
3. Press F5 or click the "Start Debugging" button in Visual Studio.

   Note: When running the application for the first time, Visual Studio may display a prompt asking you to trust the IIS Developer certificate. This prompt sometimes appears beneath the browser window. If the `aichatweb-app` resource doesn't start, check for this certificate prompt and click "Yes" to accept it. The application won't run until you've accepted this certificate.
4. The .NET Aspire dashboard will open in your browser first, displaying all the services in your application.
5. Shortly after, the web application will launch in another browser tab.

If you run into issues running the Qdrant container, stop debugging and start the application again.
Explore the .NET Aspire dashboard to understand the architecture of your application:
1. You'll see several services running:
   - `aichatweb-app`: The main web application
   - `vectordb`: The Qdrant vector database service
2. Click on each service to see more details:
   - Explore the Endpoints tab to see service URLs
   - Check the Logs tab to monitor service activity
   - View the environment variables to understand service configuration
3. Notice how .NET Aspire orchestrates all these services together, making it easy to develop distributed applications.
Let's test the AI functionality of the application:
1. Launch the `aichatweb-app` by clicking the hyperlinked URL in the Endpoints column of the .NET Aspire dashboard. The web app should open in a separate tab with a chat interface.
2. Type a message like "What PDF documents do you have information about?" and press Enter.
3. The AI will respond with information about the PDF documents that have been ingested.
4. Ask another question like "Tell me about survival kits" and observe how the AI uses information from the ingested documents to provide a response.
5. Notice how the chat history is maintained and displayed in the left sidebar.
In this part, you learned:

- How to create a new project using the AI Web Chat template in Visual Studio
- How to update NuGet packages in a solution to get the latest AI and Aspire components
- How to configure GitHub Models as the AI service provider
- How to set up the connection string for AI services
- How to use .NET Aspire to orchestrate multiple services
- How to interact with an AI-powered chat application
Problem: Application doesn't start, appears to hang during launch.
Solution: Look for the IIS Developer certificate trust prompt (may be hidden behind the browser). Click "Yes" to accept the certificate.
Alternative Solution: If the certificate dialog has been dismissed or doesn't show, you can manually trust the development certificate using the .NET CLI:
```shell
dotnet dev-certs https --trust
```

This command will regenerate and trust the ASP.NET Core HTTPS development certificate. For more information, see Trust the ASP.NET Core HTTPS development certificate.
Problem: Build fails with static asset conflicts or package restore issues.
Solution:
```shell
dotnet clean
dotnet restore
dotnet build
```

Problem: Authentication errors or "unauthorized" messages when testing chat.
Solution:
- Verify your GitHub token has the correct permissions
- Check that the token is correctly placed in `secrets.json`
- Ensure the connection string format is correct: `Endpoint=https://models.inference.ai.azure.com;Key=YOUR_TOKEN`
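A quick sanity check of the connection string's shape can be done with a regular expression before pasting it into `secrets.json`. This is a sketch using a placeholder key, not a real token:

```shell
# Sketch: validate the Endpoint=...;Key=... shape (placeholder key, not a real token).
CONN="Endpoint=https://models.inference.ai.azure.com;Key=YOUR_TOKEN"
if echo "$CONN" | grep -Eq '^Endpoint=https://[^;]+;Key=[^;]+$'; then
  echo "format OK"
else
  echo "format invalid"
fi
```

This only checks the structure of the string; it cannot tell you whether the token itself is valid or has the Models permission.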
Problem: Can't find the AI Web Chat template in Visual Studio.
Solution:
- Verify the template is installed: `dotnet new list | Select-String aichatweb`
- If not found, install it: `dotnet new install Microsoft.Extensions.AI.Templates`
- Restart Visual Studio after template installation
Excellent work! Your AI application is running successfully. Time to dive deeper into the code!
Continue to → Part 3: Explore the Template Code
In Part 3, you'll learn how to:
- 🏗️ Understand the application architecture and structure
- 🔍 Explore the AI integration patterns
- 📊 Learn about vector database usage
- 🧩 Discover how components work together