feat: add network column to schema for multi-network support #463

silent-cipher wants to merge 6 commits
Conversation
Pull request overview
Adds network scoping to core runtime tables so a single backend instance can operate against both calibration and mainnet without address/deal collisions.
Changes:
- Introduces `network` columns and updates PK/FK/unique constraints to be network-aware (migration + entity updates).
- Updates several upsert/conflict targets to include `network` for correct deduplication across chains.
- Adds a `SUPPORTED_NETWORKS` constant and updates unit tests to reflect conflict key changes.
Reviewed changes
Copilot reviewed 12 out of 12 changed files in this pull request and generated 7 comments.
| File | Description |
|---|---|
| apps/backend/src/database/migrations/1776790420000-AddNetworkColumn.ts | Adds network columns, updates PK/FK/unique constraints and supporting indexes for multi-network isolation. |
| apps/backend/src/database/entities/storage-provider.entity.ts | Makes StorageProvider primary key composite (address, network). |
| apps/backend/src/database/entities/deal.entity.ts | Adds network column and updates SP relation join to (sp_address, network). |
| apps/backend/src/database/entities/job-schedule-state.entity.ts | Adds network column (but currently leaves unique index definition misaligned with new schema). |
| apps/backend/src/database/entities/data-retention-baseline.entity.ts | Makes baseline primary key composite (provider_address, network). |
| apps/backend/src/wallet-sdk/wallet-sdk.service.ts | Updates provider upsert conflict target to include network. |
| apps/backend/src/wallet-sdk/wallet-sdk.service.spec.ts | Updates unit test to expect network-aware conflict paths. |
| apps/backend/src/jobs/repositories/job-schedule.repository.ts | Updates job schedule upsert conflict target to include network. |
| apps/backend/src/data-retention/data-retention.service.ts | Updates baseline upsert conflict columns to include network. |
| apps/backend/src/data-retention/data-retention.service.spec.ts | Updates unit test to expect network-aware conflict columns. |
| apps/backend/src/common/constants.ts | Adds SUPPORTED_NETWORKS constant used by the migration. |
```diff
    faultedPeriods: "10",
    successPeriods: "90",
    lastBlockNumber: "1200",
  },
- ["providerAddress"],
+ ["providerAddress", "network"],
```
This test updates the conflict columns to include network, but it still asserts an upsert payload that omits network. To prevent cross-network baseline collisions, update the test to expect network to be set on the persisted baseline (using the configured blockchain network).
```diff
  INSERT INTO job_schedule_state (job_type, sp_address, interval_seconds, next_run_at)
  VALUES ($1, $2, $3, $4)
- ON CONFLICT (job_type, sp_address) DO UPDATE
+ ON CONFLICT (job_type, sp_address, network) DO UPDATE
  SET interval_seconds = EXCLUDED.interval_seconds,
      paused = job_schedule_state.paused,
```
This schedule upsert uses ON CONFLICT (job_type, sp_address, network) but the INSERT statement doesn’t provide a network value, so every row will be written under the DB default network. That prevents separate schedules per chain in multi-network mode. Include network in the INSERT columns/values and accept/pass it as a parameter (or otherwise derive it) so schedules are stored per network.
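One way to close this gap is a hypothetical sketch (not the PR's actual repository code): have the upsert accept `network` and include it in both the INSERT column list and the conflict target, so each chain keeps its own schedule row. The table and column names follow the hunk above; `buildScheduleUpsert` itself is illustrative.

```typescript
// Illustrative builder for a network-aware schedule upsert. The real
// repository would execute this against Postgres; here we only assemble
// the statement and its parameters.
interface ScheduleUpsert {
  jobType: string;
  spAddress: string;
  network: string; // e.g. "mainnet" | "calibration"
  intervalSeconds: number;
  nextRunAt: string; // ISO timestamp
}

function buildScheduleUpsert(s: ScheduleUpsert): { sql: string; params: unknown[] } {
  const sql =
    "INSERT INTO job_schedule_state (job_type, sp_address, network, interval_seconds, next_run_at) " +
    "VALUES ($1, $2, $3, $4, $5) " +
    "ON CONFLICT (job_type, sp_address, network) DO UPDATE " +
    "SET interval_seconds = EXCLUDED.interval_seconds, paused = job_schedule_state.paused";
  // network is both inserted ($3) and part of the conflict target, so rows
  // from different chains never overwrite each other.
  return { sql, params: [s.jobType, s.spAddress, s.network, s.intervalSeconds, s.nextRunAt] };
}
```

The key point is that the conflict target and the inserted columns stay in sync: adding `network` to `ON CONFLICT` without also inserting it would route every row through the column default.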
I believe this still needs to be fixed
```diff
    faultedPeriods: baseline.faultedPeriods.toString(),
    successPeriods: baseline.successPeriods.toString(),
    lastBlockNumber: blockNumber.toString(),
  },
- ["providerAddress"],
+ ["providerAddress", "network"],
```
persistBaseline() now upserts on (providerAddress, network) but the upsert payload does not include network. That means every baseline row will be written under the default network, and baselines from different chains will collide/overwrite. Add network to the upserted entity (from the configured blockchain network) and ensure any in-memory baseline map keys are network-scoped as well so deltas don’t mix across chains.
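A minimal sketch of the network-scoped map keys suggested here, assuming the shapes from the hunks above (`Network`, the baseline fields) rather than the service's real types: keying the in-memory baselines by `(network, address)` means same-address providers on two chains get two entries instead of overwriting each other.

```typescript
// Illustrative network-scoped baseline storage; not the actual service code.
type Network = "mainnet" | "calibration";

interface Baseline {
  providerAddress: string;
  network: Network;
  faultedPeriods: string;
  successPeriods: string;
}

// Composite key: the same provider address on two chains maps to two entries.
function baselineKey(providerAddress: string, network: Network): string {
  return `${network}:${providerAddress.toLowerCase()}`;
}

const baselines = new Map<string, Baseline>();

function persistBaseline(b: Baseline): void {
  // The persisted entity carries network, matching the (providerAddress,
  // network) conflict columns used by the DB upsert.
  baselines.set(baselineKey(b.providerAddress, b.network), b);
}
```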
SgtPooki left a comment
Looks like there are just a few more gaps, and we need to make sure runtime reads pull from the correct network-scoped records. Pass network in the relevant DB read functions and make sure it’s handled in the appropriate repository/query logic.
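To illustrate the point about reads, here is a minimal sketch (record shape and function name are invented for the example, not taken from the PR): queries should filter by `(address, network)` rather than address alone.

```typescript
// Illustrative network-scoped read. In the real backend this would be a
// repository/query-builder filter; here a plain array stands in for the table.
interface DealRow {
  spAddress: string;
  network: string;
  dealId: number;
}

function dealsFor(rows: DealRow[], spAddress: string, network: string): DealRow[] {
  // Filtering on address alone would return rows from every chain.
  return rows.filter((r) => r.spAddress === spAddress && r.network === network);
}
```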
```typescript
@PrimaryColumn({ name: "network", type: "text" })
network!: Network;
```
can we use an enum instead?
```diff
- @PrimaryColumn({ name: "network", type: "text" })
- network!: Network;
+ @PrimaryColumn({
+   name: "network",
+   type: "enum",
+   enum: Network,
+ })
+ network!: Network;
```
```typescript
@Column({ name: "network", type: "text" })
network: Network;
```
ditto for enum if possible
```typescript
@Column({ name: "network", type: "text" })
network!: Network;
```
ditto for enum if possible
```typescript
@PrimaryColumn({ name: "network", type: "text" })
network!: Network;
```
```typescript
 * Backfill strategy: existing rows are assigned the network declared by the
 * operator via `DEALBOT_LEGACY_NETWORK_BACKFILL` (preferred) or the legacy
 * `NETWORK` env var. The migration fails fast if neither is set to a supported
 * network, so backfill is never silently wrong. Operators later expanding a
 * single-network deployment to an additional network must update their
 * `NETWORKS` config and re-run a providers_refresh to populate the
 * network-scoped rows for the newly added network.
```
can we get a specific runbook for operators to use to handle the migration to multi-network?
SgtPooki left a comment
I couldn't see anything wrong at first glance, so I threw gemini + claude + codex at it and they uncovered potential gaps. I'm testing locally and will get back with findings.
```typescript
private mapJobPayload(row: ScheduleRow): SpJobData | ProvidersRefreshJobData | DataRetentionJobData {
  if (
    row.job_type === "deal" ||
    row.job_type === "retrieval" ||
    row.job_type === "data_set_creation" ||
    row.job_type === "piece_cleanup"
  ) {
    return { jobType: row.job_type, spAddress: row.sp_address, intervalSeconds: row.interval_seconds };
  }
  return { intervalSeconds: row.interval_seconds };
}
```
we likely need network here as well
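A hedged sketch of what that could look like: the row's `network` is carried through to both SP-scoped and global payloads. `ScheduleRow` and the payload interfaces are reconstructed from the hunk above, not the real backend types.

```typescript
// Illustrative network-aware payload mapper.
type Network = "mainnet" | "calibration";

interface ScheduleRow {
  job_type: string;
  sp_address: string;
  network: Network;
  interval_seconds: number;
}

interface SpJobData {
  jobType: string;
  spAddress: string;
  network: Network;
  intervalSeconds: number;
}

interface GlobalJobData {
  network: Network;
  intervalSeconds: number;
}

const SP_JOB_TYPES = ["deal", "retrieval", "data_set_creation", "piece_cleanup"];

function mapJobPayload(row: ScheduleRow): SpJobData | GlobalJobData {
  if (SP_JOB_TYPES.includes(row.job_type)) {
    return {
      jobType: row.job_type,
      spAddress: row.sp_address,
      network: row.network, // carried through so workers act on the right chain
      intervalSeconds: row.interval_seconds,
    };
  }
  return { network: row.network, intervalSeconds: row.interval_seconds };
}
```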
```typescript
if (isSpJobType(jobType)) {
  const spData = data as SpJobData;
  if (!finalOptions.singletonKey) {
    finalOptions.singletonKey = spData.spAddress;
```
we likely need to update singletonKey to spAddress + network?
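A minimal sketch of that composite key, so the same SP address queued on two networks does not dedupe into a single job. The function name and key format are assumptions for illustration, not the PR's actual code.

```typescript
// Illustrative composite singleton key: network-prefixed so queue
// deduplication is scoped per chain.
function singletonKeyFor(spAddress: string, network: string): string {
  return `${network}:${spAddress}`;
}
```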
I'm finding it a bit tricky to define the scope of this PR and where to draw the boundary for changes. My initial thought was to split the work between this PR and #468 so we could roll out smaller PRs. Would you prefer that approach, or should I instead consolidate everything from #468 into this PR and make it a single change?
@silent-cipher Yeah, I think a schema-only PR would be ideal, and keeping PRs small, but this PR already does some application-level piping of data. At a minimum, I think we need #510 on top of this (or similar, if you recommend differently). I don't want to disrupt you too much, so let me know what you think would be best.
I'm thinking of splitting this into three PRs to keep things small and backward compatible -
Summary

Introduces a `network` column to the core runtime tables so a single dealbot instance can safely operate across both `calibration` and `mainnet` without collisions between same-address providers/deals on different chains.

Step towards #297