chore: Update Python version compatibility to 3.10-3.14 #138
Conversation
Implement a factory function that creates RunPod serverless handlers, eliminating code duplication across generated handler files.

The generic_handler module provides:
- create_handler(function_registry) factory that accepts a dict of function/class objects and returns a RunPod-compatible handler
- Automatic serialization/deserialization using cloudpickle + base64
- Support for both function execution and class instantiation + method calls
- Structured error responses with full tracebacks for debugging
- Load manifest for cross-endpoint function discovery

This design centralizes all handler logic in one place, making it easy to:
- Fix bugs once, benefit all handlers
- Add new features without regenerating projects
- Keep deployment packages small (handler files are ~23 lines each)

Implementation:
- deserialize_arguments(): Base64 + cloudpickle decoding
- serialize_result(): Cloudpickle + base64 encoding
- execute_function(): Handles function vs. class execution
- load_manifest(): Loads flash_manifest.json for service discovery
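A minimal sketch of the factory pattern described above. The helper names (create_handler, serialize_result, deserialize_arguments) follow the commit; the event/payload shape is an assumption, and the standard pickle module stands in for cloudpickle so the sketch is self-contained.

```python
import base64
import pickle  # stand-in for cloudpickle; same loads/dumps API for this sketch
import traceback

def deserialize_arguments(encoded):
    """Decode a base64-encoded, pickled (args, kwargs) payload."""
    if not encoded:
        return (), {}
    args, kwargs = pickle.loads(base64.b64decode(encoded))
    return args, kwargs

def serialize_result(result):
    """Pickle + base64-encode a result for JSON-safe transmission."""
    return base64.b64encode(pickle.dumps(result)).decode("ascii")

def create_handler(function_registry):
    """Factory: returns a RunPod-style handler closed over the registry."""
    def handler(event):
        payload = event.get("input", {})
        name = payload.get("function_name")
        target = function_registry.get(name)
        if target is None:
            # Structured error response instead of raising
            return {"success": False, "error": f"Unknown function: {name}"}
        try:
            args, kwargs = deserialize_arguments(payload.get("arguments"))
            result = target(*args, **kwargs)
            return {"success": True, "result": serialize_result(result)}
        except Exception as exc:
            return {"success": False, "error": str(exc),
                    "traceback": traceback.format_exc()}
    return handler
```

Because the handler is a closure over the registry, every generated handler file only needs to build the registry dict and call the factory.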
…uild process

Implement the build pipeline components that work together to generate serverless handlers from @Remote decorated functions. Three core components:

1. RemoteDecoratorScanner (scanner.py)
- Uses Python AST to discover all @Remote decorated functions
- Extracts function metadata: name, module, async status, is_class
- Groups functions by resource_config for handler generation
- Handles edge cases like decorated classes and async functions

2. ManifestBuilder (manifest.py)
- Groups functions by their resource_config
- Creates flash_manifest.json structure for service discovery
- Maps functions to their modules and handler files
- Enables cross-endpoint function routing at runtime

3. HandlerGenerator (handler_generator.py)
- Creates lightweight handler_*.py files for each resource config
- Each handler imports functions and registers them in FUNCTION_REGISTRY
- Handler delegates to create_handler() factory from generic_handler
- Generated handlers are ~23 lines (vs ~98 with duplication)

Build Pipeline Flow:
1. Scanner discovers @Remote functions
2. ManifestBuilder groups them by resource_config
3. HandlerGenerator creates handler_*.py for each group
4. All files + manifest bundled into archive.tar.gz

This eliminates ~95% duplication across handlers by using the factory pattern instead of template-based generation.
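The AST-based discovery step can be sketched roughly like this — a simplified scanner that finds functions decorated with @Remote in source text. The real RemoteDecoratorScanner also resolves modules and resource configs; this sketch only shows the decorator-matching core.

```python
import ast

def find_remote_functions(source):
    """Scan Python source text for functions decorated with @Remote
    and report basic metadata (name, async status)."""
    tree = ast.parse(source)
    found = []
    for node in ast.walk(tree):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            for dec in node.decorator_list:
                # Matches both @Remote(...) calls and a bare @Remote
                target = dec.func if isinstance(dec, ast.Call) else dec
                if isinstance(target, ast.Name) and target.id == "Remote":
                    found.append({
                        "name": node.name,
                        "is_async": isinstance(node, ast.AsyncFunctionDef),
                    })
    return found
```

Working on the AST rather than importing user code means discovery works even when the project's dependencies are not installed in the build environment.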
Implement 19 unit tests covering all major paths through the generic_handler factory and its helper functions.

Test Coverage:

Serialization/Deserialization (7 tests):
- serialize_result() with simple values, dicts, lists
- deserialize_arguments() with empty, args-only, kwargs-only, mixed inputs
- Round-trip encoding/decoding of cloudpickle + base64

Function Execution (4 tests):
- Simple function execution with positional and keyword arguments
- Keyword argument handling
- Class instantiation and method calls
- Argument passing to instance methods

Handler Factory (8 tests):
- create_handler() returns callable RunPod handler
- Handler with simple function registry
- Missing function error handling (returns error response, not exception)
- Function exceptions caught with traceback included
- Multiple functions in single registry
- Complex Python objects (classes, lambdas, closures)
- Empty registry edge case
- Default execution_type parameter
- None return values
- Correct RunPod response format (success, result/error, traceback)

Test Strategy:
- Arrange-Act-Assert pattern for clarity
- Isolated unit tests (no external dependencies)
- Tests verify behavior, not implementation
- Error cases tested for proper error handling
- All serialization tested for round-trip correctness

All tests passing, 83% coverage on generic_handler.py
…canning

Implement integration tests validating that the build pipeline components work correctly together.

Test Coverage:

HandlerGenerator Tests:
- Handler files created with correct names (handler_<resource_name>.py)
- Generated files import required functions from workers
- FUNCTION_REGISTRY properly formatted
- create_handler() imported from generic_handler
- Handler creation via factory
- RunPod start call present and correct
- Multiple handlers generated for multiple resource configs

ManifestBuilder Tests:
- Manifest structure with correct version and metadata
- Resources grouped by resource_config
- Handler file paths correct
- Function metadata preserved (name, module, is_async, is_class)
- Function registry mapping complete

Scanner Tests:
- @Remote decorated functions discovered via AST
- Function metadata extracted correctly
- Module paths resolved properly
- Async functions detected
- Class methods detected
- Edge cases handled (multiple decorators, nested classes)

Test Strategy:
- Integration tests verify components work together
- Tests verify generated files are syntactically correct
- Tests validate data structures match expected schemas
- No external dependencies in build process

Validates that the entire build pipeline:
1. Discovers functions correctly
2. Groups them appropriately
3. Generates valid Python handler files
4. Creates correct manifest structure
Add comprehensive architecture documentation explaining why the factory pattern was chosen and how it works.

Documentation includes:

Overview & Context:
- Problem statement: Handler files had 95% duplication
- Design decision: Use factory function instead of templates
- Benefits: Single source of truth, easier maintenance, consistency

Architecture Diagrams (MermaidJS):
- High-level flow: @Remote functions → Scanner → Manifest → Handlers → Factory
- Component relationships: HandlerGenerator, GeneratedHandler, generic_handler
- Function registry pattern: Discovery → Grouping → Registration → Factory

Implementation Details:
- create_handler(function_registry) signature and behavior
- deserialize_arguments(): Base64 + cloudpickle decoding
- serialize_result(): Cloudpickle + base64 encoding
- execute_function(): Function vs. class execution
- load_manifest(): Service discovery via flash_manifest.json

Design Decisions (with rationale):
- Factory Pattern over Inheritance: Simpler, less coupling, easier to test
- CloudPickle + Base64: Handles arbitrary objects, safe JSON transmission
- Manifest in Generic Handler: Runtime service discovery requirement
- Structured Error Responses: Debugging aid, functional error handling
- Both Execution Types: Supports stateful classes and pure functions

Usage Examples:
- Simple function handler
- Class execution with methods
- Multiple functions in one handler

Build Process Integration:
- 4-phase pipeline: Scanner → Grouping → Generation → Packaging
- Manifest structure and contents
- Generated handler structure (~23 lines)

Testing Strategy:
- 19 unit tests covering all major paths
- 7 integration tests verifying handler generation
- Manual testing with example applications

Performance:
- Zero runtime penalty (factory called once at startup)
- No additional indirection in request path
Document the flash build command and update the CLI README to include it.

New Documentation: flash-build.md includes:

Usage & Options:
- Command syntax: flash build [OPTIONS]
- --no-deps: Skip transitive dependencies (faster, smaller archives)
- --keep-build: Keep build directory for inspection/debugging
- --output, -o: Custom archive name (default: archive.tar.gz)

What It Does (5-step process):
1. Discovery: Scan for @Remote decorated functions
2. Grouping: Group functions by resource_config
3. Handler Generation: Create lightweight handler files
4. Manifest Creation: Generate flash_manifest.json
5. Packaging: Create archive.tar.gz for deployment

Build Artifacts:
- .flash/archive.tar.gz: Deployment package (ready for RunPod)
- .flash/flash_manifest.json: Service discovery configuration
- .flash/.build/: Temporary build directory

Handler Generation:
- Explains factory pattern and minimal handler files
- Links to Runtime_Generic_Handler.md for details

Dependency Management:
- Default behavior: Install all dependencies, including transitive ones
- --no-deps: Only direct dependencies (when base image has transitive)
- Trade-offs explained

Cross-Endpoint Function Calls:
- Example showing GPU and CPU endpoints
- Manifest enables routing automatically

Output & Troubleshooting:
- Sample build output with progress indicators
- Common failure scenarios and solutions
- How to debug with --keep-build

Next Steps:
- Test locally with flash run
- Deploy to RunPod
- Monitor with flash undeploy list

Updated CLI README.md:
- Added flash build to command list in sequence
- Links to full flash-build.md documentation
Add a new section explaining how the build system works and why the factory pattern reduces code duplication.

New Section: Build Process and Handler Generation

Explains:

How Flash Builds Your Application (5-step pipeline):
1. Discovery: Scans code for @Remote decorated functions
2. Grouping: Groups functions by resource_config
3. Handler Generation: Creates lightweight handler files
4. Manifest Creation: Generates flash_manifest.json for service discovery
5. Packaging: Bundles everything into archive.tar.gz

Handler Architecture (with code example):
- Shows generated handler using factory pattern
- Single source of truth: All handler logic in one place
- Easier maintenance: Bug fixes don't require rebuilding projects

Cross-Endpoint Function Calls:
- Example of GPU and CPU endpoints calling each other
- Manifest and runtime wrapper handle service discovery

Build Artifacts:
- .flash/.build/: Temporary build directory
- .flash/archive.tar.gz: Deployment package
- .flash/flash_manifest.json: Service configuration

Links to detailed documentation:
- docs/Runtime_Generic_Handler.md for architecture details
- src/tetra_rp/cli/docs/flash-build.md for CLI reference

This section bridges the main README and the detailed documentation, providing an entry point for new users discovering the build system.
Wire up the handler generator, manifest builder, and scanner into the actual flash build command implementation.

Changes to build.py:

1. Integration:
- Import RemoteDecoratorScanner for function discovery
- Import ManifestBuilder for manifest creation
- Import HandlerGenerator for handler file creation
- Call these in sequence during the build process

2. Build Pipeline:
- After copying project files, scan for @Remote functions
- Build manifest from discovered functions
- Generate handler files for each resource config
- Write manifest to build directory
- Progress indicators show what's being generated

3. Fixes:
- Change .tetra directory references to .flash
- Uncomment actual build logic (was showing "Coming Soon" message)
- Fix progress messages to show actual file counts

4. Error Handling:
- Try/catch around handler generation
- Warning shown if generation fails, but build continues
- User can debug with --keep-build flag

Build Flow Now:
1. Load ignore patterns
2. Collect project files
3. Create build directory
4. Copy files to build directory
5. [NEW] Scan for @Remote functions
6. [NEW] Build and write manifest
7. [NEW] Generate handler files
8. Install dependencies
9. Create archive
10. Clean up build directory (unless --keep-build)

Dependencies:
- Updated uv.lock with all required dependencies
…handling

**Critical Fixes:**
- Remove "Coming Soon" message blocking build command execution
- Fix build directory to use .flash/.build/ directly (no app_name subdirectory)
- Fix tarball to extract with flat structure using arcname="."
- Fix cleanup to remove correct build directory

**Error Handling & Validation:**
- Add specific exception handling (ImportError, SyntaxError, ValueError)
- Add import validation to generated handlers
- Add duplicate function name detection across resources
- Add proper error logging throughout build process

**Resource Type Tracking:**
- Add resource_type field to RemoteFunctionMetadata
- Track actual resource types (LiveServerless, CpuLiveServerless)
- Use actual types in manifest instead of hardcoding

**Robustness Improvements:**
- Add handler import validation post-generation
- Add manifest path fallback search (cwd, module dir, legacy location)
- Add resource name sanitization for safe filenames
- Add specific exception logging in scanner (UnicodeDecodeError, SyntaxError)

**User Experience:**
- Add troubleshooting section to README
- Update manifest path documentation in docs
- Change "Zero Runtime Penalty" to "Minimal Runtime Overhead"
- Mark future enhancements as "Not Yet Implemented"
- Improve build success message with next steps

Fixes all 20 issues identified in code review (issues #1-13, #19-22)
Implement LoadBalancerSlsResource class for provisioning RunPod load-balanced serverless endpoints.

Load-balanced endpoints expose HTTP servers directly to clients without queue-based processing, enabling REST APIs, webhooks, and real-time communication patterns.

Key features:
- Type enforcement (always LB, never QB)
- Scaler validation (REQUEST_COUNT required, not QUEUE_DELAY)
- Health check polling via /ping endpoint (200/204 = healthy)
- Post-deployment verification with configurable retries
- Async and sync health check methods
- Comprehensive unit tests
- Full documentation with architecture diagrams and examples

Architecture:
- Extends ServerlessResource with LB-specific behavior
- Validates configuration before deployment
- Polls /ping endpoint until healthy (10 retries × 5s = 50s timeout)
- Raises TimeoutError if endpoint fails to become healthy

This forms the foundation for the Mothership architecture, where a load-balanced endpoint serves as a directory server for child endpoints.
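The /ping polling loop described above can be sketched as follows. The check callable is abstracted so the sketch needs no HTTP client; the retry count, delay, and 200/204-healthy convention follow the commit, while the function and exception names are assumptions.

```python
import asyncio

class HealthCheckTimeout(TimeoutError):
    """Raised when the endpoint never reports healthy."""

async def wait_until_healthy(check_ping, retries=10, delay=5.0):
    """Poll a /ping-style health check until it reports healthy.

    check_ping is an async callable returning an HTTP status code;
    200 and 204 count as healthy. Raises TimeoutError after `retries`
    failed attempts (default 10 x 5s = 50s, as described above).
    Returns the attempt number that succeeded.
    """
    for attempt in range(1, retries + 1):
        status = await check_ping()
        if status in (200, 204):
            return attempt
        await asyncio.sleep(delay)
    raise HealthCheckTimeout(f"Endpoint not healthy after {retries} x {delay}s")
```

In the real resource, check_ping would issue a GET against the endpoint's /ping URL.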
Import ServerlessResource directly and use patch.object on the imported class instead of string-based patches. This ensures the mocks properly intercept the parent class's _do_deploy method when called via super(). Simplifies mock configuration and removes an unused variable assertion. Fixes the three failing deployment tests that were making real GraphQL API calls. All tests now pass: 418 passed, 1 skipped.
…oints

Implement core infrastructure for enabling the @Remote decorator on LoadBalancerSlsResource endpoints with HTTP method/path routing.

Changes:
- Create LoadBalancerSlsStub: HTTP-based stub for direct endpoint execution (src/tetra_rp/stubs/load_balancer_sls.py, 170 lines)
  - Serializes functions and arguments using cloudpickle + base64
  - Direct HTTP POST to /execute endpoint (no queue polling)
  - Proper error handling and deserialization
- Register stub with singledispatch (src/tetra_rp/stubs/registry.py)
  - Enables @Remote to dispatch to LoadBalancerSlsStub for LB resources
- Extend @Remote decorator with HTTP routing parameters (src/tetra_rp/client.py)
  - Add 'method' parameter: GET, POST, PUT, DELETE, PATCH
  - Add 'path' parameter: /api/endpoint routes
  - Validate that method/path are required for LoadBalancerSlsResource
  - Store routing metadata on decorated functions/classes
  - Warn if routing params are used with non-LB resources

Foundation for Phase 2 (build system integration) and Phase 3 (local dev).
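The routing-parameter validation and metadata storage can be sketched with a plain decorator. The attribute names (_http_method, _http_path) and the decorator name here are illustrative assumptions, not the actual @Remote implementation; only the validated method set and path convention come from the commit.

```python
import functools

VALID_METHODS = {"GET", "POST", "PUT", "DELETE", "PATCH"}

def remote_route(method=None, path=None):
    """Illustrative decorator: validates HTTP routing parameters and
    stores them as metadata on the wrapped function."""
    if method is not None and method.upper() not in VALID_METHODS:
        raise ValueError(f"Unsupported HTTP method: {method}")
    if path is not None and not path.startswith("/"):
        raise ValueError(f"Route path must start with '/': {path}")

    def decorator(func):
        @functools.wraps(func)
        def wrapper(*args, **kwargs):
            return func(*args, **kwargs)
        # Routing metadata later read by the scanner / stub
        wrapper._http_method = method.upper() if method else None
        wrapper._http_path = path
        return wrapper
    return decorator
```

Storing the metadata on the function object is what lets the build-time scanner and the runtime stub both discover the route without any central registry.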
Update RemoteDecoratorScanner to extract the HTTP method and path from the @Remote decorator for LoadBalancerSlsResource endpoints.

Changes:
- Add http_method and http_path fields to RemoteFunctionMetadata
- Add _extract_http_routing() method to parse decorator keywords
- Extract method (GET, POST, PUT, DELETE, PATCH) from decorator
- Extract path (/api/process) from decorator
- Store routing metadata for manifest generation

Foundation for Phase 2.2 (manifest updates) and Phase 2.3 (handler generation).
Enhance ManifestBuilder to support HTTP method/path routing for
LoadBalancerSlsResource endpoints.
Changes:
- Add http_method and http_path fields to ManifestFunction
- Validate LB endpoints have both method and path
- Detect and prevent route conflicts (same method + path)
- Prevent use of reserved paths (/execute, /ping)
- Add 'routes' section to manifest for LB endpoints
- Conditional inclusion of routing fields (only for LB)
Manifest structure for LB endpoints now includes:
{
  "resources": {
    "api_service": {
      "resource_type": "LoadBalancerSlsResource",
      "functions": [
        {
          "name": "process_data",
          "http_method": "POST",
          "http_path": "/api/process"
        }
      ]
    }
  },
  "routes": {
    "api_service": {
      "POST /api/process": "process_data"
    }
  }
}
Implement LBHandlerGenerator to create FastAPI applications for LoadBalancerSlsResource endpoints with HTTP method/path routing.

Key features:
- Generates FastAPI apps with an explicit route registry
- Creates (method, path) -> function mappings from the manifest
- Validates route conflicts and reserved paths
- Imports user functions and creates dynamic routes
- Includes the required /ping health check endpoint
- Validates generated handler Python syntax via import

The generated handler structure enables:
- Direct HTTP routing to user functions via FastAPI
- Framework /execute endpoint for @Remote stub execution
- Local development with uvicorn
Create a create_lb_handler() factory function that dynamically builds FastAPI applications from route registries for LoadBalancerSlsResource endpoints.

Key features:
- Accepts route_registry: Dict[(method, path)] -> handler_function mapping
- Registers all user-defined routes from the registry to the FastAPI app
- Provides /execute endpoint for @Remote stub function execution
- Handles async function execution automatically
- Serializes results with cloudpickle + base64 encoding
- Comprehensive error handling with detailed logging

The /execute endpoint enables:
- Remote function code execution via the @Remote decorator
- Automatic argument deserialization from cloudpickle/base64
- Result serialization for transmission back to the client
- Support for both sync and async functions
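The registry-building step that feeds this factory can be sketched without FastAPI (which is not assumed installed here): turning the manifest's "routes" section into a (method, path) -> callable mapping, with the reserved-path and conflict checks described above. The function name and error types are assumptions.

```python
def build_route_registry(manifest_routes, function_registry):
    """Turn a manifest 'routes' section ("METHOD /path" -> function name)
    into a (method, path) -> callable mapping, rejecting reserved paths
    and duplicate routes. Shapes follow the manifest example above."""
    RESERVED = {"/execute", "/ping"}  # framework-owned endpoints
    registry = {}
    for route_key, func_name in manifest_routes.items():
        method, path = route_key.split(" ", 1)
        if path in RESERVED:
            raise ValueError(f"Reserved path: {path}")
        key = (method, path)
        if key in registry:
            raise ValueError(f"Route conflict: {method} {path}")
        registry[key] = function_registry[func_name]
    return registry
```

In the real handler, each entry of this mapping would then be registered on the FastAPI app (e.g. via its dynamic route-registration API) before uvicorn starts serving.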
Update the build command to use the appropriate handler generator based on resource type. Separates LoadBalancerSlsResource endpoints (using FastAPI) from queue-based endpoints (using the generic handler).

Changes:
- Import LBHandlerGenerator alongside HandlerGenerator
- Inspect manifest resources and separate them by type
- Generate LB handlers via LBHandlerGenerator
- Generate QB handlers via HandlerGenerator
- Combine all generated handler paths for the summary

Enables users to mix LB and QB endpoints in the same project with correct code generation for each resource type.
Implement the LiveLoadBalancer resource following the LiveServerless pattern for local development and testing of load-balanced endpoints.

Changes:
- Add TETRA_LB_IMAGE constant for the load-balanced Tetra image
- Create LiveLoadBalancer class extending LoadBalancerSlsResource
- Uses LiveServerlessMixin to lock imageName to the Tetra LB image
- Register LiveLoadBalancer with LoadBalancerSlsStub in singledispatch
- Export LiveLoadBalancer from core.resources and the top-level __init__

This enables users to test LB-based functions locally before deploying, using the same pattern as LiveServerless for queue-based endpoints. Users can now write:

from tetra_rp import LiveLoadBalancer, remote

api = LiveLoadBalancer(name="test-api")

@Remote(api, method="POST", path="/api/process")
async def process_data(x, y):
    return {"result": x + y}

result = await process_data(5, 3)  # Local execution
Implement unit tests for LoadBalancerSlsStub covering:
- Request preparation with arguments and dependencies
- Response handling for success and error cases
- Error handling for invalid responses
- Base64 encoding/decoding of serialized data
- Endpoint URL validation
- Timeout and HTTP error handling

Test coverage:
- _prepare_request: 4 tests
- _handle_response: 5 tests
- _execute_function: 3 error case tests
- __call__: 2 integration tests

Tests verify proper function serialization, argument handling, error propagation, and response deserialization.
Fix test_load_balancer_vs_queue_based_endpoints by updating the @Remote decorator to use method='POST' and path='/api/echo' to match the test assertions. This was a test-level bug where the decorator definition didn't match what was being asserted.
…ndpoints

- Using_Remote_With_LoadBalancer.md: User guide for HTTP routing, local development, building, and deploying
- LoadBalancer_Runtime_Architecture.md: Technical details on deployment, request flows, security, and performance
- Updated README.md with a LoadBalancer section and code example
- Updated Load_Balancer_Endpoints.md with cross-references to the new guides
Split @Remote execution behavior between local and deployed:
- LiveLoadBalancer (local): Uses the /execute endpoint for function serialization
- LoadBalancerSlsResource (deployed): Uses user-defined routes with HTTP param mapping

Changes:

1. LoadBalancerSlsStub routing detection:
- _should_use_execute_endpoint() determines the execution path
- _execute_via_user_route() maps args to JSON and POSTs to user routes
- Auto-detects resource type and routing metadata

2. Conditional /execute registration:
- create_lb_handler() now accepts an include_execute parameter
- Generated handlers default to include_execute=False (security)
- LiveLoadBalancer can enable /execute if needed

3. Updated handler generator:
- Added clarifying comments on /execute exclusion for deployed endpoints

4. Comprehensive test coverage:
- 8 new tests for routing detection and execution paths
- All 31 tests passing (22 unit + 9 integration)

5. Documentation updates:
- Using_Remote_With_LoadBalancer.md: clarified /execute scope
- Added a 'Local vs Deployed Execution' section explaining the differences
- LoadBalancer_Runtime_Architecture.md: updated execution model
- Added troubleshooting for deployed endpoint scenarios

Security improvement:
- Deployed endpoints only expose user-defined routes
- /execute endpoint removed from production (prevents arbitrary code execution)
- Lower attack surface for deployed endpoints
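The routing decision the stub makes can be sketched like this. The class and attribute names are illustrative assumptions; the decision rule (local LiveLoadBalancer falls back to /execute, deployed resources require an explicit user route) follows the split described above.

```python
def should_use_execute_endpoint(resource, func):
    """Decide whether a call goes through the framework /execute
    endpoint (local LiveLoadBalancer) or a user-defined HTTP route
    (deployed LoadBalancerSlsResource)."""
    is_live = type(resource).__name__ == "LiveLoadBalancer"
    has_route = bool(getattr(func, "_http_method", None)
                     and getattr(func, "_http_path", None))
    # Local dev can always use /execute; deployed endpoints must go
    # through their user-defined route (no /execute in production).
    return is_live or not has_route
```

This keeps the arbitrary-code-execution path confined to local development images.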
…lude /execute endpoint

- Modified manifest.py to validate LiveLoadBalancer endpoints like LoadBalancerSlsResource
- Updated lb_handler_generator to:
  - Include LiveLoadBalancer in the handler generation filter
  - Pass include_execute=True for LiveLoadBalancer (local dev)
  - Pass include_execute=False for LoadBalancerSlsResource (deployed)
- Added integration tests:
  - Verify LiveLoadBalancer handlers include the /execute endpoint
  - Verify deployed handlers exclude the /execute endpoint
- Fixes a critical bug: LiveLoadBalancer now gets the /execute endpoint in generated handlers
…ss resources

- Updated scanner to extract LiveLoadBalancer and LoadBalancerSlsResource resources
- Previously it only looked for 'Serverless' in the class name, missing LoadBalancer endpoints
- Now checks for both 'Serverless' and 'LoadBalancer' in resource type names
- Added an integration test to verify the scanner discovers both resource types
- Fixes a critical bug that prevented flash build from finding LoadBalancer endpoints
- Wrap long lines in manifest.py, lb_handler.py, and load_balancer_sls.py
- Remove unused httpx import in test_load_balancer_sls_stub.py
- Apply consistent formatting across the codebase
- Scanner: Use exact type name matching instead of substring matching
  - Whitelist specific resource types to avoid false positives
  - Prevents matching classes like 'MyServerlessHelper' or 'LoadBalancerUtils'
- Type hints: Use Optional[str] for nullable fields in the manifest
  - ManifestFunction.http_method and http_path are now properly typed
- Timeout: Make the HTTP client timeout configurable
  - Added LoadBalancerSlsStub.DEFAULT_TIMEOUT class attribute
  - Added timeout parameter to __init__
  - Updated both _execute_function and _execute_via_user_route to use self.timeout
- Deprecated datetime: Replace datetime.utcnow() with datetime.now(timezone.utc)
  - Updated manifest.py and test_lb_remote_execution.py
  - Avoids the datetime.utcnow() deprecation introduced in Python 3.12
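The exact-match whitelist is a small change but worth making concrete. A sketch, with a whitelist assembled from the resource types named in this PR (the exact set in the codebase may differ):

```python
# Exact type-name matching against a whitelist, instead of substring
# checks like `'Serverless' in name` which would wrongly match a class
# named MyServerlessHelper or LoadBalancerUtils.
KNOWN_RESOURCE_TYPES = {
    "LiveServerless",
    "CpuLiveServerless",
    "LoadBalancerSlsResource",
    "LiveLoadBalancer",
}

def is_known_resource_type(type_name: str) -> bool:
    return type_name in KNOWN_RESOURCE_TYPES
```

Set membership is also O(1), so this is both safer and no slower than the substring check.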
The set_serverless_template model_validator was being overwritten by sync_input_fields (both had mode="after"). In Pydantic v2, when two validators with the same mode are defined in a class, only one is registered. This caused templates never to be created from imageName, resulting in:

"GraphQL errors: One of templateId, template is required to create an endpoint"

Solution:
- Move the set_serverless_template validator from the ServerlessResource base class to the subclasses (ServerlessEndpoint and LoadBalancerSlsResource) where the validation is actually needed
- Keep helper methods (_create_new_template, _configure_existing_template) in the base class for reuse
- Add comprehensive tests for LiveLoadBalancer template serialization

This allows:
1. Base ServerlessResource to be instantiated freely for testing/configuration
2. Subclasses (ServerlessEndpoint, LoadBalancerSlsResource) to enforce template requirements during deployment
3. Proper template serialization in the GraphQL payload for the RunPod API

Fixes the "One of templateId, template is required to create an endpoint" error when deploying LiveLoadBalancer with custom image tags like runpod/tetra-rp-lb:local
- Fix: Use correct endpoint URL format for load-balanced endpoints
(https://{id}.api.runpod.ai instead of https://api.runpod.ai/v2/{id})
This fixes 404 errors on /ping health check endpoints
- Feature: Add CPU LoadBalancer support
* Create CpuLoadBalancerSlsResource for CPU-based load-balanced endpoints
* Create CpuLiveLoadBalancer for local CPU LB development
* Add TETRA_CPU_LB_IMAGE constant for CPU LB Docker image
* Update example code to use CpuLiveLoadBalancer for CPU worker
* Add 8 comprehensive tests for CPU LoadBalancer functionality
- Tests: Add 2 tests for endpoint URL format validation
- All 474 tests passing, 64% code coverage
…etra_rp package

LoadBalancer resources were not being discovered by ResourceDiscovery because the new CPU variants (CpuLiveLoadBalancer, CpuLoadBalancerSlsResource) were not exported from the main tetra_rp package. This prevented undeploy from picking up these resources.

Added exports to:
- TYPE_CHECKING imports for type hints
- __getattr__ function for lazy loading
- __all__ list for the public API

This fixes the issue where 'flash undeploy list' could not find LoadBalancer resources that were deployed with 'flash run --auto-provision'.
…alancer-sls-resource
…-1196-absolute-drift-detection
The scanner now extracts resource names from the name= parameter rather
than using variable names. Update test assertions to expect the actual
resource names ('test-api', 'deployed-api') instead of variable names.
…-1196-absolute-drift-detection
Use saveEndpoint mutation for all changes instead of manual lifecycle management. Server-side automatically detects version-triggering fields (GPU, template, volumes) and increments endpoint version accordingly. Keep _has_structural_changes() as informational for logging purposes only. This aligns with RunPod API's version-based deployment model.
…ng changes

Update the docstring to reflect that this method identifies changes that trigger a server-side version increment and worker recreation, not manual redeploy cycles. Explain which changes are version-triggering vs. rolling updates, and note that the method is now informational for logging only.
Remove env from EXCLUDED_HASH_FIELDS so changes to environment variables trigger drift detection and endpoint updates. Environment changes are non-version-triggering (rolling updates), so server will apply them via saveEndpoint without recreating workers. Add env to CPU LoadBalancer config_hash for consistent behavior across all resource types. Update comments to reflect that env is user-specified configuration, not dynamically computed.
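The drift-detection hashing described above can be sketched as follows. The exact excluded field set here is illustrative (the real EXCLUDED_HASH_FIELDS lives in serverless.py); the key point shown is that env is hashed, so environment changes produce a different hash and trigger drift.

```python
import hashlib
import json

# Runtime/computed fields excluded from the drift hash. Per this change,
# "env" is NOT excluded, so environment variable edits change the hash.
# This particular field set is an assumption for the sketch.
EXCLUDED_HASH_FIELDS = {"id", "workersRunning", "createdAt"}

def config_hash(config: dict) -> str:
    """Stable hash of user-specified configuration for drift detection."""
    hashable = {k: v for k, v in config.items() if k not in EXCLUDED_HASH_FIELDS}
    # sort_keys gives a deterministic serialization regardless of dict order
    payload = json.dumps(hashable, sort_keys=True, default=str)
    return hashlib.sha256(payload.encode()).hexdigest()
```

Comparing the stored hash against the freshly computed one is then enough to decide whether a saveEndpoint call is needed.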
Rename tests to match the new behavior:
- test_lb_config_hash_excludes_env_variables → test_lb_config_hash_detects_env_changes
- test_env_var_changes_no_drift → test_env_var_changes_trigger_drift
- test_config_hash_excludes_env_from_drift → test_config_hash_detects_env_from_drift

Update assertions to expect different hashes when env changes, matching the new behavior where environment variable changes trigger drift and updates.
- Fix type annotation for the timeout parameter in LoadBalancerSlsStub (Optional[float])
- Replace hardcoded "30s" with the actual self.timeout in error messages (2 locations)
- Update Resource_Config_Drift_Detection.md to reflect the actual EXCLUDED_HASH_FIELDS
- Remove duplicate Load-Balanced Endpoints section from README.md

Addresses Copilot review comments (PR #132, review 3642596664)
- Drop Python 3.9 support (EOL)
- Ensure support for Python 3.14
- Update requires-python in pyproject.toml from >=3.9,<3.14 to >=3.10,<3.15
- Update mypy python_version from 3.9 to 3.10
- Update CI matrix to test Python 3.10, 3.11, 3.12, 3.13, and 3.14
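The corresponding pyproject.toml fragment would look roughly like this (a sketch showing only the fields named above; other sections of the real file are omitted):

```toml
[project]
requires-python = ">=3.10,<3.15"

[tool.mypy]
python_version = "3.10"
```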
Pull request overview
This PR updates Python version compatibility by dropping Python 3.9 support and adding Python 3.14, reflecting the October 2025 end-of-life of Python 3.9 and expanding support to the latest Python version. The minimum required Python version has been updated to 3.10 across project configuration files, and the code coverage requirement has been increased from 35% to 65%.
Changes:
- Updated Python version support range from 3.9-3.13 to 3.10-3.14
- Increased minimum code coverage requirement from 35% to 65%
- Updated test behavior to detect environment variable drift (previously excluded from drift detection)
Reviewed changes
Copilot reviewed 7 out of 8 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| pyproject.toml | Updated requires-python to ">=3.10,<3.15", python_version to "3.10", and cov-fail-under to 65 |
| .github/workflows/ci.yml | Updated test matrix to test Python 3.10-3.14 instead of 3.9-3.13 |
| tests/unit/resources/test_resource_manager.py | Modified test to verify env changes now trigger drift detection |
| tests/unit/resources/test_load_balancer_drift.py | Added comprehensive drift detection test suite (479 lines) |
| src/tetra_rp/core/resources/serverless.py | Added RUNTIME_FIELDS and EXCLUDED_HASH_FIELDS class variables; updated config_hash to exclude runtime fields while including env; enhanced field serializers with defensive checks; updated update() method documentation |
| src/tetra_rp/core/resources/load_balancer_sls_resource.py | Added CPU-specific config_hash implementation with implementation notes |
| docs/Resource_Config_Drift_Detection.md | Added comprehensive documentation for drift detection feature (361 lines) |
…310-314-compatibility
Remove unused functions and improve test coverage:
- Remove deprecated update_system_dependencies from template.py
- Remove unused utility functions from utils.py and json.py
- Add a comprehensive test suite for the serialization module (100% coverage)

Tests cover serialization/deserialization of args and kwargs, and error handling for cloudpickle failures, across Python 3.10-3.14.
Pull request overview
Copilot reviewed 6 out of 7 changed files in this pull request and generated no new comments.
Summary

Changes
- requires-python in pyproject.toml from >=3.9,<3.14 to >=3.10,<3.15
- python_version from 3.9 to 3.10

Testing