This file provides comprehensive guidance for AI coding agents working with the Sentry Cocoa SDK repository.
- Continuous Learning: Whenever an agent performs a task and discovers new patterns, conventions, or best practices that aren't documented here, it should add these learnings to AGENTS.md. This ensures the documentation stays current and helps future agents work more effectively.
- Context Management: When using compaction (which reduces context by summarizing older messages), the agent must re-read AGENTS.md afterwards to ensure it's always fully available in context. This guarantees that all guidelines, conventions, and best practices remain accessible throughout the entire session.
- Before forming a commit, ensure compilation succeeds for all platforms: iOS, macOS, tvOS, watchOS and visionOS. This should hold for:
- the SDK framework targets
- the sample apps
- the test targets for the SDK framework and sample apps
- Before submitting a branch for a PR, ensure there are no new issues being introduced for:
- static analysis
- runtime analysis, using thread, address and undefined behavior sanitizers
- cross platform dependencies:
- React Native
- Flutter
- .NET
- Unity
- While preparing changes, ensure that relevant documentation is added/updated in:
- headerdocs and inline comments
- readmes and maintainer markdown docs
- our docs repo and web app onboarding
- our cli and integration wizard
- Find the CI plan in the .github/workflows folder.
- Run unit tests: `make run-test-server && make test`
- Run important UI tests: `make test-ui-critical`
- Fix any test or type errors until the whole suite is green.
- Add or update tests for the code you change, even if nobody asked.
When testing error handling code paths, follow these guidelines:
Testable Error Paths:
Many system call errors can be reliably tested:
- File operation failures: Use invalid/non-existent paths, closed file descriptors, or permission-restricted paths
- Directory operation failures: Use invalid directory paths
- Network operation failures: Use invalid addresses or closed sockets
Example test pattern:

```objc
- (void)testFunction_HandlesOperationFailure
{
    // -- Arrange --
    // This test verifies that functionName handles errors correctly when
    // operation() fails. The error handling code path lives in SourceFile.c
    // and is verified through code review.
    // Set up the error trigger here (e.g., invalid path, closed fd, etc.).

    // -- Act --
    bool result = functionName(/* parameters that will cause the error */);

    // -- Assert --
    // Verify that the function fails gracefully, i.e. the error handling
    // path executes.
    XCTAssertFalse(result, @"functionName should fail with error condition");
}
```

Untestable Error Paths:
Some error paths cannot be reliably tested in a test environment:
- System calls with hardcoded valid parameters: Cannot pass invalid parameters to trigger failures
- Resource exhaustion scenarios: System limits may not be enforceable in test environments
- Function interposition limitations: `DYLD_INTERPOSE` only works for dynamically linked symbols; statically linked system calls cannot be reliably mocked
Documenting Untestable Error Paths:
When an error path cannot be reliably tested:
- Remove the test if one was attempted but couldn't be made to work
- Add documentation in the test file explaining:
- Why there's no test for the error path
- Approaches that were tried and why they failed
- That the error handling code path exists and is correct (verified through code review)
- Add a comment in the source code at the error handling location explaining why it cannot be tested
- Update PR description to document untestable error paths in the "How did you test it?" section
Test Comment Best Practices:
- Avoid line numbers in test comments - they become outdated when code changes
- Reference function names and file names instead of line numbers
- Document the error condition being tested (e.g., "when open() fails")
- Explain verification approach - verify that the error handling path executes correctly rather than capturing implementation details
- Pre-commit Hooks: This repository uses pre-commit hooks that may modify files during the commit (e.g., formatting hooks). If a commit fails because hooks changed files, retry the commit with the updated files.
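As a rough sketch of that retry behavior (the helper name and the `git add -u` staging step are assumptions, not part of the repository's tooling):

```shell
#!/usr/bin/env bash
# Sketch only: retry a commit once when pre-commit hooks modify files.
# The staging strategy (git add -u) is an assumption; adjust as needed.
set -euo pipefail

commit_with_retry() {
    local message="$1"
    if ! git commit -m "$message"; then
        # Hooks likely reformatted tracked files; stage the updates and retry once.
        git add -u
        git commit -m "$message"
    fi
}
```

In a repository without failing hooks the first `git commit` simply succeeds and no retry happens.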
This project uses Conventional Commits 1.0.0 for all commit messages.
Commit Message Structure:
```
<type>[optional scope]: <description>

[optional body]

[optional footer(s)]
```
Required Types:
- `feat:` - A new feature (correlates with MINOR in SemVer)
- `fix:` - A bug fix (correlates with PATCH in SemVer)
Other Allowed Types:
- `build:` - Changes to build system or dependencies
- `chore:` - Routine tasks, maintenance
- `ci:` - Changes to CI configuration
- `docs:` - Documentation changes
- `style:` - Code style changes (formatting, missing semicolons, etc.)
- `refactor:` - Code refactoring without changing functionality
- `perf:` - Performance improvements
- `test:` - Adding or updating tests
Breaking Changes:
- Add `!` after the type/scope: `feat!:` or `feat(api)!:`
- Or use a footer: `BREAKING CHANGE: description`
Examples:
feat: add new session replay feature
fix: resolve memory leak in session storage
docs: update installation guide
refactor: simplify event serialization
feat!: change API response format
BREAKING CHANGE: API now returns JSON instead of XML
NEVER mention AI assistant names (like Claude, ChatGPT, Cursor, etc.) in commit messages or PR descriptions.
Keep commit messages focused on the technical changes made and their purpose.
What to avoid:
- ❌ "Add feature X with Claude's help"
- ❌ "Co-Authored-By: Claude noreply@anthropic.com"
- ❌ "Co-Authored-By: Cursor noreply@cursor.com"
- ❌ "Generated with Claude Code"
- ❌ "Generated by Cursor"
- ❌ "🤖 Generated with Claude Code"
Good examples:
- ✅ "feat: add user authentication system"
- ✅ "fix: resolve connection pool exhaustion"
- ✅ "refactor: simplify error handling logic"
- Format code: `make format`
- Run static analysis: `make analyze`
- Run unit tests: `make run-test-server && make test`
- Run important UI tests: `make test-ui-critical`
- Build the XCFramework deliverables: `make build-xcframework`
- Lint the pod deliverable: `make pod-lint`
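To run several of these checks in sequence locally, a small helper like the following can be used (this is a sketch, not an existing script; the runner-injection pattern exists only to make the loop easy to dry-run):

```shell
#!/usr/bin/env bash
# Sketch: run a sequence of check targets, stopping at the first failure.
# Pass `make` as the runner in real use; any command works for dry runs.
set -euo pipefail

run_checks() {
    local runner="$1"
    shift
    local target
    for target in "$@"; do
        "$runner" "$target" || return 1
    done
}

# Dry run that prints each target instead of invoking make:
run_checks echo format analyze test-ui-critical
```

With `make` as the runner, the loop stops at the first failing target, which mirrors how CI fails fast.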
- Main Documentation: docs.sentry.io/platforms/apple
- Docs Repo: sentry-docs
- SDK Developer Documentation: develop.sentry.dev/sdk/
- README: @README.md
- Contributing: @CONTRIBUTING.md
- Developer README: @develop-docs/README.md
- Sample App collection README: @Samples/README.md
- sentry-cli: uploading dSYMs for symbolicating stack traces gathered via the SDK
- sentry-wizard: automatically injecting SDK initialization code
- sentry-cocoa onboarding: the web app's onboarding instructions for sentry-cocoa
- sentry-unity: the Sentry Unity SDK, which depends on sentry-cocoa
- sentry-dart: the Sentry Dart SDK, which depends on sentry-cocoa
- sentry-react-native: the Sentry React Native SDK, which depends on sentry-cocoa
- sentry-dotnet: the Sentry .NET SDK, which depends on sentry-cocoa
Use concise, action-oriented names that describe the workflow's primary purpose:
Format: [Action] [Subject]
Examples:
- ✅ `Release` (not "Release a new version")
- ✅ `UI Tests` (not "Sentry Cocoa UI Tests")
- ✅ `Benchmarking` (not "Run benchmarking tests")
- ✅ `Lint SwiftLint` (not "Lint Swiftlint Formatting")
- ✅ `Test CocoaPods` (not "CocoaPods Integration Test")
Use clear, concise descriptions that avoid redundancy with the workflow name:
Principles:
- Remove redundant prefixes - Don't repeat the workflow name
- Use action verbs - Start with what the job does
- Avoid version-specific naming - Don't include Xcode versions, tool versions, etc.
- Keep it concise - Maximum 3-4 words when possible
Patterns:
- ✅ `Build XCFramework Slice` (not "Build XCFramework Variant Slice")
- ✅ `Assemble XCFramework Variant` (not "Assemble XCFramework" - be specific about variants)
- ✅ `Build App and Test Runner`
- ✅ `${{matrix.sdk}}` for platform-specific builds (e.g., "iphoneos", "macosx")
- ✅ `${{inputs.name}}${{inputs.suffix}}` for variant assembly (e.g., "Sentry-Dynamic")
- ✅ `Test ${{matrix.name}} V3 # Up the version with every change to keep track of flaky tests`
- ✅ `Unit ${{matrix.name}}` (for unit test matrices)
- ✅ `Run Benchmarks ${{matrix.suite}}` (for benchmarking matrices)
- ✅ `Test SwiftUI V4 # Up the version with every change to keep track of flaky tests`
- ✅ `Test Sentry Duplication V4 # Up the version with every change to keep track of flaky tests`
Note:
- Version numbers (V1, V2, etc.) are included in test job names for flaky test tracking, with explanatory comments retained.
- For matrix-based jobs, use clean variable names that produce readable job names (e.g., `${{matrix.sdk}}`, `${{matrix.name}}`, `${{inputs.name}}${{inputs.suffix}}`).
- When the matrix includes multiple iOS versions, add a descriptive `name` field to each matrix entry (e.g., "iOS 16 Swift", "iOS 17 Swift") for clear job identification.
- ✅ `Validate XCFramework` (not "Validate XCFramework - Static")
- ✅ `Validate SPM Static` (not "Validate Swift Package Manager - Static")
- ✅ `Check API Stability` (not "API Stability Check")
- ✅ `Lint` (job name when the workflow already specifies the tool, e.g., "Lint SwiftLint")
- ❌ `SwiftLint` (redundant with workflow name "Lint SwiftLint")
- ❌ `Clang Format` (redundant with workflow name "Lint Clang")
- ✅ `Collect App Metrics` (not "Collect app metrics")
- ✅ `Detect File Changes` (not "Detect Changed Files")
- ✅ `Release New Version` (not "Release a new version")
For UI test jobs that need version tracking for flaky test management, include the version number in BOTH the job name AND a comment:
Format: `[Job Name] V{number} # Up the version with every change to keep track of flaky tests`
Example:
```yaml
name: Test iOS Swift V5 # Up the version with every change to keep track of flaky tests
```

Rationale:
- Version numbers must be in the job name because failure rate monitoring captures job names and ignores comments
- Comments are kept to provide context and instructions for developers
When using matrix variables, prefer descriptive names over technical details:
Examples:
- ✅ `Test ${{matrix.name}}` where name = "iOS Objective-C", "tvOS Swift"
- ✅ `Test ${{matrix.name}}` where name = "iOS 16 Swift", "iOS 17 Swift", "iOS 18 Swift"
- ✅ `Unit ${{matrix.name}}` where name = "iOS 16 Sentry", "macOS 15 Sentry", "tvOS 18 Sentry"
- ✅ `Run Benchmarks ${{matrix.suite}}` where suite = "High-end device", "Low-end device"
- ✅ `Check API Stability (${{ matrix.version }})` where version = "default", "v9"
- ❌ `Test iOS Swift Xcode ${{matrix.xcode}}` (version-specific)
For reusable workflows (workflow_call), use descriptive names that indicate their purpose:
Examples:
- ✅ `Build XCFramework Slice`
- ✅ `Assemble XCFramework Variant`
- ✅ `UI Tests Common`
- Status Check Stability - Names won't break when tool versions change
- Cleaner GitHub UI - Shorter, more readable names in PR checks
- Better Organization - Consistent patterns make workflows easier to understand
- Future-Proof - Version-agnostic naming reduces maintenance overhead
- Branch Protection Compatibility - Stable names work well with GitHub's branch protection rules
❌ Don't include:
- Tool versions (Xcode 15.4, Swift 5.9, etc.) unless they are relevant to the job
- Redundant workflow prefixes ("Release /", "UI Tests /")
- Overly verbose descriptions
- Technical implementation details in user-facing names
- Lowercase inconsistency
❌ Examples of what NOT to do:
- "Release / Build XCFramework Variant Slice (Sentry, mh_dylib, -Dynamic, sentry-dynamic) / Build XCFramework Slice"
- "UI Tests / UI Tests for iOS-Swift Xcode 15.4 - V5"
- "Lint Swiftlint Formatting / SwiftLint" (redundant job name)
- "Build Sentry Cocoa XCFramework Variant Slice"
This document outlines the concurrency configuration strategy for all GitHub Actions workflows in the Sentry Cocoa repository. The strategy optimizes CI resource usage while ensuring critical runs (like main branch pushes) are never interrupted.
- Cancel outdated PR runs - When new commits are pushed to a PR, cancel the previous workflow run since only the latest commit matters for merge decisions
- Protect critical runs - Never cancel workflows running on main branch, release branches, or scheduled runs as these are essential for maintaining baseline quality and release integrity
- Per-branch grouping - Use `github.ref` for consistent concurrency grouping across all branch types
All workflows follow standardized concurrency patterns based on their trigger types and criticality.
Used by: Most workflows that run on both main/release branches AND pull requests
```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}
```

Behavior:
- ✅ Cancels in-progress runs when new commits are pushed to PRs
- ✅ Never cancels runs on main branch pushes
- ✅ Never cancels runs on release branch pushes
- ✅ Never cancels runs on scheduled runs
- ✅ Never cancels manual workflow_dispatch runs
Examples: test.yml, build.yml, benchmarking.yml, ui-tests.yml, all lint workflows
Used by: Workflows that ONLY run on pull requests
```yaml
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true
```

Behavior:
- ✅ Always cancels in-progress runs (safe since they only run on PRs)
- ✅ Provides immediate feedback on latest changes
Examples: danger.yml, api-stability.yml, changes-in-high-risk-code.yml
Used by: Utility workflows with specific requirements
```yaml
concurrency:
  group: "auto-update-tools"
  cancel-in-progress: true
```

Example: auto-update-tools.yml (uses a fixed group name for global coordination)
- Standard: `${{ github.workflow }}-${{ github.ref }}`
- Benefits:
- Unique per workflow and branch/PR
- Consistent across all workflow types
- Works with main, release, and feature branches
- Handles PRs and direct pushes uniformly
- Simpler logic - No conditional expressions needed
- Consistent behavior - Same pattern works for all trigger types
- Per-branch grouping - Natural grouping by branch without special cases
- Better maintainability - Single pattern to understand and maintain
Before:

```yaml
cancel-in-progress: ${{ !(github.event_name == 'push' && github.ref == 'refs/heads/main') && github.event_name != 'schedule' }}
```

After:

```yaml
cancel-in-progress: ${{ github.event_name == 'pull_request' }}
```

Why simplified:
- ✅ Much more readable and maintainable
- ✅ Functionally identical behavior
- ✅ Clear intent: "only cancel on pull requests"
- ✅ Less prone to errors
Examples: benchmarking.yml, ui-tests.yml
- Use conditional cancellation to protect expensive main branch runs
- Include detailed comments explaining resource considerations
- May include special cleanup steps (e.g., SauceLabs job cancellation)
Examples: All lint workflows, danger.yml
- Use appropriate cancellation strategy based on trigger scope
- Focus on providing quick feedback on latest changes
Examples: test.yml, build.yml, release.yml
- Never cancel on main/release branches to maintain quality gates
- Ensure complete validation of production-bound code
Each workflow's concurrency block must include comments explaining:
- Purpose - Why concurrency control is needed for this workflow
- Resource considerations - Any expensive operations (SauceLabs, device time, etc.)
- Branch protection logic - Why main/release branches need complete runs
- User experience - How the configuration improves feedback timing
```yaml
# Concurrency configuration:
# - We use workflow-specific concurrency groups to prevent multiple benchmark runs on the same code,
#   as benchmarks are extremely resource-intensive and require dedicated device time on SauceLabs.
# - For pull requests, we cancel in-progress runs when new commits are pushed to avoid wasting
#   expensive external testing resources and provide timely performance feedback.
# - For main branch pushes, we never cancel benchmarks to ensure we have complete performance
#   baselines for every main branch commit, which are critical for performance regression detection.
concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: ${{ github.event_name == 'pull_request' }}
```

Every directory that contains code, tests, or configuration affecting CI should be included in at least one filter pattern.
Files should be grouped with workflows they logically affect:
- Source changes → Build and test workflows
- Test changes → Test workflows
- Configuration changes → Relevant validation workflows
- Script changes → Workflows using those scripts
Use glob patterns (`**`) to capture all subdirectories and their contents recursively.
Before submitting a PR that affects project structure:
- List all new or renamed directories
- Check if each directory appears in `.github/file-filters.yml`
- Add missing patterns to appropriate filter groups
- Verify glob patterns match intended files
- Test locally using the `dorny/paths-filter` action logic
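For a quick local smoke test of a pattern, bash case-pattern matching can approximate the check. Note this is only an approximation of the action's minimatch semantics (in a case pattern a single `*` can cross `/`, unlike in minimatch), so use it for sanity checks, not as ground truth:

```shell
#!/usr/bin/env bash
# Rough local check of filter patterns; NOT minimatch-exact.
set -euo pipefail

matches_pattern() {
    local path="$1" pattern="$2"
    # The pattern must stay unquoted so it is treated as a glob, not a literal.
    case "$path" in
        $pattern) return 0 ;;
        *) return 1 ;;
    esac
}

matches_pattern "Sources/Sentry/SentryClient.m" "Sources/**" && echo "covered"
```

For exact behavior, open a test PR and inspect the `dorny/paths-filter` step output in the Actions logs instead.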
✅ Good:

```yaml
- "Sources/**" # Matches all files under Sources/
- "Tests/**" # Matches all files under Tests/
- "SentryTestUtils/**" # Matches all files under SentryTestUtils/
```

❌ Bad:

```yaml
- "Sources/*" # Only matches one level deep
- "Tests/" # Doesn't match files, only the directory
```

✅ Good:

```yaml
- "Samples/iOS-Cocoapods-*/**" # Matches multiple specific samples
- "**/*.xctestplan" # Matches test plans anywhere
- "scripts/ci-*.sh" # Matches CI scripts specifically
```

❌ Bad:

```yaml
- "Samples/**" # Too broad, includes unrelated samples
- "**/*" # Matches everything (defeats the purpose)
```

Always include configuration files that affect the workflow:
```yaml
run_unit_tests_for_prs: &run_unit_tests_for_prs
  - "Sources/**"
  - "Tests/**"
  # GH Actions - Changes to these workflows should trigger tests
  - ".github/workflows/test.yml"
  - ".github/file-filters.yml"
  # Project files - Changes to project structure should trigger tests
  - "Sentry.xcodeproj/**"
  - "Sentry.xcworkspace/**"
```

These are complete, production-ready filter patterns for common workflow types. Use them as templates when adding new workflows or ensuring proper coverage.
Required coverage: All test-related directories (Tests, SentryTestUtils, SentryTestUtilsDynamic, SentryTestUtilsTests) must be included to ensure changes to test infrastructure trigger test runs.
```yaml
run_unit_tests_for_prs: &run_unit_tests_for_prs
  - "Sources/**" # Source code changes
  - "Tests/**" # Test changes
  - "SentryTestUtils/**" # Test utility changes
  - "SentryTestUtilsDynamic/**" # Dynamic test utilities
  - "SentryTestUtilsTests/**" # Test utility tests
  - ".github/workflows/test.yml" # Workflow definition
  - ".github/file-filters.yml" # Filter changes
  - "scripts/ci-*.sh" # CI scripts
  - "test-server/**" # Test infrastructure
  - "**/*.xctestplan" # Test plans
  - "Plans/**" # Test plan directory
  - "Sentry.xcodeproj/**" # Project structure
```

```yaml
run_lint_swift_formatting_for_prs: &run_lint_swift_formatting_for_prs
  - "**/*.swift" # All Swift files
  - ".github/workflows/lint-swift-formatting.yml"
  - ".github/file-filters.yml"
  - ".swiftlint.yml" # Linter config
  - "scripts/.swiftlint-version" # Version config
```

```yaml
run_build_for_prs: &run_build_for_prs
  - "Sources/**" # Source code
  - "Samples/**" # Sample projects
  - ".github/workflows/build.yml"
  - ".github/file-filters.yml"
  - "Sentry.xcodeproj/**" # Project files
  - "Package*.swift" # SPM config
  - "scripts/sentry-xcodebuild.sh" # Build script
```
- Check the paths-filter configuration in the workflow:

  ```yaml
  - uses: dorny/paths-filter@v3
    id: changes
    with:
      filters: .github/file-filters.yml
  ```

- Verify the filter name matches between `file-filters.yml` and the workflow:

  ```yaml
  # In file-filters.yml
  run_unit_tests_for_prs: &run_unit_tests_for_prs

  # In workflow
  if: steps.changes.outputs.run_unit_tests_for_prs == 'true'
  ```

- Test the pattern locally using glob matching tools
Common issues:
- Missing `**` for recursive matching
- Using `*` instead of `**` for deep directories
- Forgetting to include file extensions
- Pattern too broad or too narrow
Periodically review file-filters.yml to:
- Remove patterns for deleted directories
- Update patterns for renamed directories
- Ensure new directories are covered
- Verify patterns match current structure
Each filter group should have comments explaining:
- What the filter is for
- Which workflow uses it
- Special considerations
When updating file-filters.yml:
- Create a test PR with changes in the new pattern
- Verify the expected workflow triggers
- Check that unrelated workflows don't trigger
- Review the GitHub Actions logs for filter results
When reviewing PRs that add/move/rename directories:
- Identify all affected directories:

  ```shell
  gh pr view --json files --jq '.files[].path' | cut -d'/' -f1-2 | sort | uniq
  ```

- Check each directory against file-filters.yml:

  ```shell
  grep -r "DirectoryName" .github/file-filters.yml
  ```

- Add missing patterns to appropriate filter groups
- Verify the changes trigger correct workflows
Consider adding a script that:
- Detects new top-level directories
- Checks if they appear in file-filters.yml
- Warns in PR if missing coverage
Example location: .github/workflows/check-file-filters.yml
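A minimal sketch of such a workflow follows; every name in it (the workflow and job names, and the `scripts/check-file-filters.sh` helper) is hypothetical, not an existing file:

```yaml
# Hypothetical sketch for .github/workflows/check-file-filters.yml.
name: Check File Filters

on:
  pull_request:

concurrency:
  group: ${{ github.workflow }}-${{ github.ref }}
  cancel-in-progress: true # safe: this workflow only runs on PRs

jobs:
  check-filter-coverage:
    name: Check Filter Coverage
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Warn on uncovered directories
        run: ./scripts/check-file-filters.sh # assumed helper script
```

The concurrency block follows the PR-only pattern described above, and the job name follows the `[Action] [Subject]` naming convention.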