
feat(jsonrpc): add resource restrict for jsonrpc #69

Open
317787106 wants to merge 13 commits into develop from hotfix/restrict_jsonrpc_size

Conversation

Owner

@317787106 317787106 commented Apr 16, 2026

What does this PR do?

Adds configurable resource limits to the JSON-RPC endpoint to prevent memory exhaustion and abuse from oversized requests or responses. Closes tronprotocol#6632

Changes:

  1. Batch size limit (node.jsonrpc.maxBatchSize, default: 100)

    • Validates the array length of batch JSON-RPC requests before dispatching.
    • Requests exceeding the limit are rejected with error code -32005 (exceed limit).
    • The check is skipped when maxBatchSize ≤ 0 (no limit).
  2. Response size limit (node.jsonrpc.maxResponseSize, default: 25 MB)

    • Introduces BufferedResponseWrapper: intercepts getOutputStream() and getWriter() writes into an in-memory buffer. When a write would exceed the configured limit, it sets an overflow flag and resets the buffer instead of continuing to accumulate bytes, bounding worst-case memory usage to at most maxResponseSize.
    • Introduces CachedBodyRequestWrapper: replays the pre-read request body via both getInputStream() and getReader(), so the body can be inspected before being forwarded to JsonRpcServer.
    • After the handler returns, the servlet checks isOverflow() and — if set — discards the partial buffer and returns error code -32003 (response too large).
  3. Address list limit (node.jsonrpc.maxAddressSize, default: 1000)

    • In LogFilter, validates the address array length in eth_getLogs / eth_newFilter requests.
    • Requests exceeding the limit are rejected with JsonRpcInvalidParamsException.
  4. Structured JSON-RPC error responses

    • writeJsonRpcError uses ObjectMapper to build error responses safely, avoiding JSON injection from error messages.
    • Error codes: -32700 (parse error) is defined by the JSON-RPC 2.0 spec; -32005 (exceed limit) and -32003 (response too large) are implementation-defined codes within the spec's reserved server-error range (-32000 to -32099).
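The limit checks above can be sketched in plain Java. The class, method, and constant names below are illustrative, not the PR's actual identifiers, and the real servlet builds error bodies with Jackson's ObjectMapper rather than string formatting:

```java
import java.util.Locale;

// Illustrative sketch of the batch-size gate and error envelope; names
// are hypothetical, not taken from the PR's code.
final class JsonRpcLimits {
  static final int EXCEED_LIMIT = -32005;

  // A non-positive maxBatchSize disables the check, matching the
  // "skipped when maxBatchSize <= 0" behavior described above.
  static boolean exceedsBatchLimit(int batchSize, int maxBatchSize) {
    return maxBatchSize > 0 && batchSize > maxBatchSize;
  }

  // Minimal JSON-RPC 2.0 error envelope. The PR builds this with
  // ObjectMapper instead, which also escapes the message safely.
  static String errorBody(int code, String message) {
    return String.format(Locale.ROOT,
        "{\"jsonrpc\":\"2.0\",\"id\":null,\"error\":{\"code\":%d,\"message\":\"%s\"}}",
        code, message);
  }
}
```

A batch of 101 against the default limit of 100 would be rejected, while any batch passes when the limit is 0.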

Why are these changes required?

  • Without limits, a client can send an arbitrarily large batch, trigger an expensive query with many addresses, or force the node to serialize a massive response — all of which cause unbounded memory growth.
  • The response buffer caps worst-case allocation to maxResponseSize and fails fast rather than buffering the entire response before checking.

Configuration

node {
  jsonrpc {
    # Max JSON-RPC batch array size; 0 = no limit
    maxBatchSize = 100
    # Max response body in bytes (default 25 MB)
    maxResponseSize = 26214400
    # Max address entries in eth_getLogs / eth_newFilter
    maxAddressSize = 1000
  }
}

This PR has been tested by:

  • Unit tests (BufferedResponseWrapperTest)
  • Manual testing


coderabbitai Bot commented Apr 16, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

This PR implements JSON-RPC resource constraints through configuration-driven limits: batch size validation during request parsing, incremental response size buffering with overflow detection, and address count validation in filter requests. Request and response wrapper classes enable non-destructive buffering and early rejection when limits are breached.
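The "incremental response size buffering with overflow detection" described here reduces to a counting output stream: accumulate bytes, and once the cap is crossed, drop the buffer and raise a flag. This is a stdlib sketch of the concept only; the class name and the servlet integration are assumptions, not the PR's actual code:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;

// Hypothetical core of the overflow-detection idea: count bytes as they
// are written; past the limit, discard the buffer and mark overflow so
// the caller can substitute a -32003 error instead of the response.
final class SizeCappedBuffer extends OutputStream {
  private final ByteArrayOutputStream buf = new ByteArrayOutputStream();
  private final long maxBytes;
  private long written;
  private boolean overflow;

  SizeCappedBuffer(long maxBytes) {
    this.maxBytes = maxBytes;
  }

  @Override
  public void write(int b) {
    if (overflow) {
      return; // stop accumulating once the limit is breached
    }
    written++;
    if (maxBytes > 0 && written > maxBytes) {
      overflow = true;
      buf.reset(); // bound worst-case memory to at most maxBytes
      return;
    }
    buf.write(b);
  }

  boolean isOverflow() {
    return overflow;
  }

  byte[] toByteArray() {
    return buf.toByteArray();
  }
}
```

Note the buffer is reset on overflow, so worst-case heap usage stays bounded even if the handler keeps writing.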

Changes

  • Configuration Parameters
    Files: common/src/main/java/org/tron/common/parameter/CommonParameter.java, common/src/main/java/org/tron/core/config/args/NodeConfig.java, common/src/main/resources/reference.conf, framework/src/main/resources/config.conf
    Added three configurable JSON-RPC limits: maxBatchSize (default 100), maxResponseSize (default 25 MB), and maxAddressSize (default 1000) across configuration layers.
  • Request/Response Buffering
    Files: framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java, framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
    Introduced two HttpServlet wrapper classes: BufferedResponseWrapper captures output incrementally and flags overflow when size exceeds the limit; CachedBodyRequestWrapper allows re-reading the request body multiple times.
  • JSON-RPC Servlet Enhancement
    Files: framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
    Modified doPost to validate batch size during parsing, return early JSON-RPC errors for invalid JSON or exceeded batch limits, buffer the request/response for size tracking, and return overflow errors with the original request IDs when responses exceed limits.
  • Configuration Bootstrap
    Files: framework/src/main/java/org/tron/core/config/args/Args.java
    Added propagation of the three new JSON-RPC config values from NodeConfig to CommonParameter during initialization.
  • Filter Validation
    Files: framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogFilter.java
    Added pre-execution validation to reject log filter requests when the address count exceeds maxAddressSize, throwing JsonRpcInvalidParamsException before query execution.
  • Test Coverage
    Files: framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java
    Added a comprehensive test suite validating BufferedResponseWrapper overflow detection via both output writes and content-length setters, commit semantics, and boundary conditions.

Sequence Diagram

sequenceDiagram
    participant Client
    participant JsonRpcServlet
    participant ObjectMapper as JSON Parser
    participant CachedBodyRequestWrapper as CachedBody<br/>Wrapper
    participant RpcServer
    participant BufferedResponseWrapper as BufferedResp<br/>Wrapper
    participant HttpResponse

    Client->>JsonRpcServlet: POST /rpc (JSON payload)
    JsonRpcServlet->>JsonRpcServlet: Read request body
    JsonRpcServlet->>ObjectMapper: Parse JSON
    alt Parse Error
        JsonRpcServlet->>Client: JSON-RPC error (-32700)
    else Array Request
        JsonRpcServlet->>JsonRpcServlet: Check batch size
        alt Batch exceeds limit
            JsonRpcServlet->>Client: JSON-RPC error (-32005)
        else Within limit
            JsonRpcServlet->>CachedBodyRequestWrapper: Wrap request with cached body
            JsonRpcServlet->>BufferedResponseWrapper: Wrap response with max size limit
            JsonRpcServlet->>RpcServer: handle(cachedReq, bufferedResp)
            RpcServer->>BufferedResponseWrapper: Write response data
            BufferedResponseWrapper->>BufferedResponseWrapper: Track bytes written
            alt Response overflows limit
                BufferedResponseWrapper->>BufferedResponseWrapper: Mark overflow=true
            end
            JsonRpcServlet->>BufferedResponseWrapper: commitToResponse()
            alt Overflow detected
                JsonRpcServlet->>Client: JSON-RPC error (-32003)
            else Success
                BufferedResponseWrapper->>HttpResponse: Write buffered response
                HttpResponse->>Client: Response
            end
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~50 minutes


Poem

🐰 Hops with glee through buffered streams,
Batch checks halt the oversized dreams,
Response limits now hold their sway,
Addresses counted to save the day,
JSON-RPC flows with grace and care!

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (2 warnings)

  • Linked Issues check (⚠️ Warning): The PR implements most core requirements from #6632: batch size limits with early validation (-32005 error), response size limits via BufferedResponseWrapper (-32003 error), and address limits in LogFilter (JsonRpcInvalidParamsException/-32602). However, request timeout control (maxRequestTimeout / node.jsonrpc.maxRequestTimeout) is not implemented. Resolution: implement request execution timeout control as specified in issue #6632 to fully meet all linked issue requirements; timeout violations should map to error code -32002.
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 21.74%, below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3 passed)

  • Title check (✅ Passed): The title "feat(jsonrpc): add resource restrict for jsonrpc" accurately summarizes the main change: adding resource restrictions (limits on batch size, response size, and address size) to the JSON-RPC service.
  • Out of Scope Changes check (✅ Passed): All code changes are within the scope of the linked issue #6632: configuration additions, parameter classes, batch/response/address validation logic, wrapper classes for request/response handling, and unit tests directly address the stated objectives.
  • Description Check (✅ Passed): Check skipped; CodeRabbit's high-level summary is enabled.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.


@cubic-dev-ai cubic-dev-ai Bot left a comment


3 issues found across 9 files

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java">

<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:129">
P2: Race condition after timeout: `future.cancel(true)` only sends an interrupt; the RPC handler may still be running. Since `BufferedResponseWrapper` doesn't override header methods (`setStatus`, `setContentType`, etc.), the handler thread writes headers directly to the underlying `resp` concurrently with `writeJsonRpcError`. This can corrupt the HTTP response. Consider either wrapping header methods in `BufferedResponseWrapper` too, or awaiting actual thread termination before writing the error.</violation>

<violation number="2" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:155">
P1: No limit on request body size. `readBody` loads the entire input stream into memory before any validation runs. A malicious client can send a multi-gigabyte body to cause OOM. Add a maximum request body size check (e.g., reject if total bytes read exceeds a configured limit).</violation>

<violation number="3" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:168">
P1: The `message` string is concatenated into the JSON response without JSON-escaping. If `message` contains `"`, `\`, or control characters, the output is malformed JSON. Use the `ObjectMapper` (already available as `MAPPER`) to build the JSON safely, or at minimum escape the string.</violation>
</file>
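Violation 3's fallback option ("at minimum escape the string") amounts to quoting the characters JSON treats specially. The following stdlib escaper is a sketch of that fallback for illustration; in this PR the cleaner fix is the one the comment prefers, building the response with the existing MAPPER:

```java
// Minimal JSON string escaper (illustrative, stdlib-only). Handles the
// quote, backslash, and control characters that would otherwise produce
// malformed JSON when interpolated into an error body.
final class JsonEscape {
  static String escape(String s) {
    StringBuilder out = new StringBuilder(s.length() + 8);
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      switch (c) {
        case '"':  out.append("\\\""); break;
        case '\\': out.append("\\\\"); break;
        case '\n': out.append("\\n");  break;
        case '\r': out.append("\\r");  break;
        case '\t': out.append("\\t");  break;
        default:
          if (c < 0x20) {
            // Remaining control characters must be \u-escaped in JSON.
            out.append(String.format("\\u%04x", (int) c));
          } else {
            out.append(c);
          }
      }
    }
    return out.toString();
  }
}
```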

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

Comment thread framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java Outdated
Comment thread framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java Outdated
@317787106 317787106 changed the title feat(jsonrpc): add resource restirct for jsonrpc feat(jsonrpc): add resource restrict for jsonrpc Apr 19, 2026

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 4

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`:
- Around line 80-105: BufferedResponseWrapper currently overrides
getOutputStream() but not getWriter(), so callers using getWriter() bypass the
buffered size-checked stream; override getWriter() in BufferedResponseWrapper to
return a PrintWriter that wraps an OutputStreamWriter which writes into the same
buffering/size-checking stream used by the existing outputStream field (honoring
actual.getCharacterEncoding() or UTF-8 fallback), ensure the writer delegates
flush/close correctly without directly committing the real response, and reuse
the same buffer/commitToResponse() path so output written via getWriter() is
included in commitToResponse().
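The direction of this fix, stripped of the servlet API, is a lazily created PrintWriter layered over the same buffer the byte path writes to, so both interfaces share one size-checked sink. SharedSink and its methods are hypothetical names used only to show the sharing:

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Sketch of the suggested getWriter()/getOutputStream() sharing: one
// underlying buffer, with the writer built lazily over it, honoring a
// character encoding and falling back to UTF-8 when none is set.
final class SharedSink {
  final ByteArrayOutputStream buffer = new ByteArrayOutputStream();
  private PrintWriter writer;

  OutputStream outputStream() {
    return buffer;
  }

  PrintWriter writer(String encoding) {
    if (writer == null) {
      Charset cs = (encoding != null)
          ? Charset.forName(encoding)
          : StandardCharsets.UTF_8;
      // Wraps the SAME buffer, so writer output is counted/committed
      // through the identical path as raw stream output.
      writer = new PrintWriter(new OutputStreamWriter(buffer, cs));
    }
    return writer;
  }
}
```

Because both paths feed one buffer, a commitToResponse()-style method only has to flush the writer and emit that single buffer.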

In
`@framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java`:
- Around line 21-49: CachedBodyRequestWrapper currently only overrides
getInputStream() to replay the stored byte[] body, so callers using getReader()
will still read the original (consumed) request; add an override for public
BufferedReader getReader() in CachedBodyRequestWrapper that constructs an
InputStreamReader over a new ByteArrayInputStream(body) using
getCharacterEncoding() (falling back to StandardCharsets.UTF_8 if null) and
wraps it in a BufferedReader so all downstream code reading via getReader() gets
the cached body.
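The replay mechanics behind the suggested getReader() override, shown without the servlet wrapper: each call constructs a fresh reader over the cached bytes, so the body can be consumed any number of times. CachedBody and readAll are illustrative names, not the PR's API:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.io.UncheckedIOException;
import java.nio.charset.StandardCharsets;

// Hypothetical sketch: cache the body bytes once, hand out a new
// BufferedReader per call so downstream readers never see a consumed
// stream. A real wrapper would use getCharacterEncoding() with a UTF-8
// fallback instead of hard-coding the charset.
final class CachedBody {
  private final byte[] body;

  CachedBody(byte[] body) {
    this.body = body.clone();
  }

  BufferedReader newReader() {
    return new BufferedReader(new InputStreamReader(
        new ByteArrayInputStream(body), StandardCharsets.UTF_8));
  }

  // Convenience: drain a fresh reader; repeatable because each call
  // starts from the cached bytes again.
  String readAll() {
    try (BufferedReader r = newReader()) {
      StringBuilder sb = new StringBuilder();
      int c;
      while ((c = r.read()) != -1) {
        sb.append((char) c);
      }
      return sb.toString();
    } catch (IOException e) {
      throw new UncheckedIOException(e);
    }
  }
}
```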

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Line 89: The request body is being capped with
parameter.getJsonRpcMaxResponseSize() in JsonRpcServlet.readBody call which
wrongly couples request-size to response-size; introduce or use a distinct
request-size limit (e.g., parameter.getJsonRpcMaxRequestSize() or a new config
field), change the readBody invocation in JsonRpcServlet to use that request
limit instead of getJsonRpcMaxResponseSize(), leave response-size uses
(getJsonRpcMaxResponseSize()) unchanged, and update configuration defaults/docs
and any parameter parsing to include the new request-size setting (with a
sensible fallback to avoid breaking deployments).
- Around line 33-35: The pre-parse steps readBody(...) and MAPPER.readTree(...)
can throw servlet-level exceptions for oversized requests or malformed JSON;
update the JsonRpcError enum to add PARSE_ERROR(-32700) and wrap the calls to
readBody and MAPPER.readTree in a try-catch that converts size-limit violations
into JsonRpcError.EXCEED_LIMIT (-32005) responses and JSON parse failures into
JsonRpcError.PARSE_ERROR (-32700) responses (keeping existing
RESPONSE_TOO_LARGE/-32003 where appropriate), ensuring the servlet returns proper
JSON-RPC error objects instead of propagating exceptions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 920b17a2-c138-473b-b836-b7d711012802

📥 Commits

Reviewing files that changed from the base of the PR and between 2de63bb and 2dad857.

📒 Files selected for processing (9)
  • common/src/main/java/org/tron/common/parameter/CommonParameter.java
  • common/src/main/java/org/tron/core/exception/jsonrpc/JsonRpcResponseTooLargeException.java
  • framework/src/main/java/org/tron/core/config/args/Args.java
  • framework/src/main/java/org/tron/core/config/args/ConfigKey.java
  • framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
  • framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogFilter.java
  • framework/src/main/resources/config.conf

rpcServer.handle(req, resp);
CommonParameter parameter = CommonParameter.getInstance();

byte[] body = readBody(req.getInputStream(), parameter.getJsonRpcMaxResponseSize());

⚠️ Potential issue | 🟠 Major

Don’t reuse jsonRpcMaxResponseSize as the request body cap.

Line 89 couples accepted request size to the response limit used on Lines 100-101. If operators lower response size to protect egress, valid requests with small responses can start failing during input buffering. Use a separate request-body limit, or make the shared limit explicit in config/docs.

🛠️ Proposed direction
-    byte[] body = readBody(req.getInputStream(), parameter.getJsonRpcMaxResponseSize());
+    byte[] body = readBody(req.getInputStream(), parameter.getJsonRpcMaxRequestSize());

Also applies to: 100-101

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
at line 89, The request body is being capped with
parameter.getJsonRpcMaxResponseSize() in JsonRpcServlet.readBody call which
wrongly couples request-size to response-size; introduce or use a distinct
request-size limit (e.g., parameter.getJsonRpcMaxRequestSize() or a new config
field), change the readBody invocation in JsonRpcServlet to use that request
limit instead of getJsonRpcMaxResponseSize(), leave response-size uses
(getJsonRpcMaxResponseSize()) unchanged, and update configuration defaults/docs
and any parameter parsing to include the new request-size setting (with a
sensible fallback to avoid breaking deployments).


@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 6 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java">

<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:104">
P1: Request timeout enforcement was removed; JSON-RPC calls can now run unbounded and tie up servlet threads.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.


@coderabbitai coderabbitai Bot left a comment


♻️ Duplicate comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (1)

33-35: ⚠️ Potential issue | 🟠 Major

Restore bounded, JSON-RPC-mapped request parsing.

readBody still buffers the entire request with no cap before batch validation, so a large body can exhaust heap. Also, MAPPER.readTree runs outside the try, so malformed JSON still escapes as a servlet error instead of a JSON-RPC parse error. Use a dedicated request-body limit and map parse failures to -32700. This is the same risk previously flagged around pre-parse handling and request-size limiting.

Suggested direction
+import com.fasterxml.jackson.core.JsonProcessingException;
 ...
   enum JsonRpcError {
+    PARSE_ERROR(-32700),
     EXCEED_LIMIT(-32005),
     RESPONSE_TOO_LARGE(-32003);
-    byte[] body = readBody(req.getInputStream());
-
-    JsonNode rootNode = MAPPER.readTree(body);
+    byte[] body;
+    JsonNode rootNode;
+    try {
+      body = readBody(req.getInputStream(), parameter.getJsonRpcMaxRequestSize());
+      rootNode = MAPPER.readTree(body);
+    } catch (RequestBodyTooLargeException e) {
+      writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT, e.getMessage(), null);
+      return;
+    } catch (JsonProcessingException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
+      return;
+    }
-  private byte[] readBody(InputStream in) throws IOException {
+  private byte[] readBody(InputStream in, int maxBytes) throws IOException {
     ByteArrayOutputStream buffer = new ByteArrayOutputStream();
     byte[] tmp = new byte[4096];
     int n;
+    int total = 0;
     while ((n = in.read(tmp)) != -1) {
+      total += n;
+      if (maxBytes > 0 && total > maxBytes) {
+        throw new RequestBodyTooLargeException(
+            "Request body exceeds the limit of " + maxBytes);
+      }
       buffer.write(tmp, 0, n);
     }
     return buffer.toByteArray();
   }

Please verify that a separate jsonRpcMaxRequestSize / getJsonRpcMaxRequestSize() is added to config and parameter loading rather than reusing jsonRpcMaxResponseSize.

#!/bin/bash
# Verify JSON-RPC request-size config exists and readBody is bounded.
rg -n -C3 'jsonRpcMax(Request|Response)Size|getJsonRpcMax(Request|Response)Size|readBody\s*\('

Also applies to: 89-91, 116-124

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 33 - 35, Add a dedicated request-size cap and map JSON parse
failures to the JSON-RPC parse error (-32700): introduce a config getter
getJsonRpcMaxRequestSize()/jsonRpcMaxRequestSize and use it in readBody to bound
reading the request (prevent full buffering), throw or return Json-RPC error
code -32700 on MAPPER.readTree failures by wrapping the parse call in the
try/catch that yields the JSON-RPC error response, and ensure exceed-limit
conditions use JsonRpcError.EXCEED_LIMIT/RESPONSE_TOO_LARGE as appropriate;
apply the same changes to the other readBody occurrences noted (lines ~89-91,
116-124).
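A stdlib rendering of the bounded readBody this prompt describes: stop reading as soon as the cap is crossed instead of buffering the whole stream first. The signature and the use of a plain IOException for the overflow signal are assumptions; the diff above uses a dedicated RequestBodyTooLargeException instead:

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Illustrative bounded read: fail fast once total bytes exceed the cap,
// so a multi-gigabyte body can never be fully buffered. maxBytes <= 0
// means no limit, mirroring the other limits in this PR.
final class BoundedRead {
  static byte[] readBody(InputStream in, int maxBytes) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    byte[] tmp = new byte[4096];
    int n;
    int total = 0;
    while ((n = in.read(tmp)) != -1) {
      total += n;
      if (maxBytes > 0 && total > maxBytes) {
        throw new IOException("Request body exceeds the limit of " + maxBytes);
      }
      buffer.write(tmp, 0, n);
    }
    return buffer.toByteArray();
  }
}
```

The caller can translate the overflow into a -32005 JSON-RPC error before any parsing happens.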
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Duplicate comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 33-35: Add a dedicated request-size cap and map JSON parse
failures to the JSON-RPC parse error (-32700): introduce a config getter
getJsonRpcMaxRequestSize()/jsonRpcMaxRequestSize and use it in readBody to bound
reading the request (prevent full buffering), throw or return Json-RPC error
code -32700 on MAPPER.readTree failures by wrapping the parse call in the
try/catch that yields the JSON-RPC error response, and ensure exceed-limit
conditions use JsonRpcError.EXCEED_LIMIT/RESPONSE_TOO_LARGE as appropriate;
apply the same changes to the other readBody occurrences noted (lines ~89-91,
116-124).

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ee275e1b-1c36-4cb8-82d2-17c535ecf953

📥 Commits

Reviewing files that changed from the base of the PR and between 2dad857 and bac4886.

📒 Files selected for processing (1)
  • framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java


@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 1 file (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java">

<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:94">
P2: Handle empty-body parsing explicitly: `readTree` may return null, which can trigger a NullPointerException on the next `rootNode` access.</violation>
</file>

Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.

JsonNode rootNode;
try {
  body = readBody(req.getInputStream());
  rootNode = MAPPER.readTree(body);

@cubic-dev-ai cubic-dev-ai Bot Apr 23, 2026


P2: Handle empty-body parsing explicitly: readTree may return null, which can trigger a NullPointerException on the next rootNode access.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java, line 94:

<comment>Handle empty-body parsing explicitly: `readTree` may return null, which can trigger a NullPointerException on the next `rootNode` access.</comment>

<file context>
@@ -86,9 +87,15 @@ public Integer getJsonRpcCode(int httpStatusCode) {
+    JsonNode rootNode;
+    try {
+      body = readBody(req.getInputStream());
+      rootNode = MAPPER.readTree(body);
+    } catch (IOException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
</file context>
Suggested change:

rootNode = MAPPER.readTree(body);
if (rootNode == null) {
  writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
  return;
}


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (1)

110-121: The buffer wrapping design is sound; no collision risk exists — but add a comment explaining the batch trade-off.

The BufferedResponseWrapper contract is correctly implemented: it strictly buffers the response body (to a ByteArrayOutputStream) and only stores header mutations (setStatus, setContentType) locally until commitToResponse(). When JsonRpcResponseTooLargeException is thrown, writeJsonRpcError(resp, ...) writes directly to the unwrapped resp and returns immediately — bufferedResp.commitToResponse() is never called, so there is no collision. The underlying response is still clean (not committed) at that point, making the write safe.

However, for a batched request that overflows, the code intentionally returns a single top-level error with id=null, discarding any partial per-call results already produced. While this is JSON-RPC spec-legal, a comment acknowledging this trade-off (e.g., on line 114–115) would help future maintainers understand the design choice.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 110 - 121, Add a clarifying comment in JsonRpcServlet near the
JsonRpcResponseTooLargeException catch (around where rpcServer.handle,
bufferedResp.commitToResponse and writeJsonRpcError are used) that explains the
buffering contract and the intentional trade-off for batched requests:
BufferedResponseWrapper only buffers body and header mutations until
commitToResponse(), so when JsonRpcResponseTooLargeException is thrown the
handler writes a single top-level error (id=null) directly to the unwrapped resp
and returns without calling bufferedResp.commitToResponse(), discarding any
partial per-call results produced for the batch; this documents why a single
error is returned instead of per-call partials.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 92-98: The current try/catch in JsonRpcServlet conflates transport
IO failures from readBody and JSON parsing failures from MAPPER.readTree into a
single JsonRpcError.PARSE_ERROR; change the code to call
readBody(req.getInputStream()) and MAPPER.readTree(body) in sequence and handle
failures separately: catch com.fasterxml.jackson.core.JsonProcessingException
(or JsonParseException) around MAPPER.readTree and call writeJsonRpcError(resp,
JsonRpcError.PARSE_ERROR, "Parse json error", null), but do not map plain
java.io.IOException from readBody to PARSE_ERROR—either let IOExceptions
propagate (so the servlet/container handles/logs them) or handle them as a
distinct transport/internal error and log appropriately; reference readBody,
MAPPER.readTree, writeJsonRpcError, and JsonRpcError.PARSE_ERROR when making the
change.
- Around line 87-131: The readBody(InputStream in) currently reads the entire
request into memory with no cap, enabling an OOM DoS; change it to enforce a
configured request-size cap (add or use
CommonParameter.getJsonRpcMaxRequestSize(), falling back to
getJsonRpcMaxResponseSize() if missing) inside readBody so it stops reading and
signals an error when the limit is exceeded; have doPost catch that condition
and call writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT, "Request body
exceeds limit", idNode) (distinct from PARSE_ERROR), and ensure the new logic
references readBody, doPost, CommonParameter, and JsonRpcError.EXCEED_LIMIT so
oversized requests are rejected before buffering the full body.

In `@framework/src/main/resources/config.conf`:
- Around line 380-383: The current behavior rejects batches and responses when
maxBatchSize/maxResponseSize are 0/negative; change checks to treat <= 0 as "no
limit": in JsonRpcServlet.doPost adjust the batch check to enforce only when
parameter.getJsonRpcMaxBatchSize() > 0 (e.g. if
(parameter.getJsonRpcMaxBatchSize() > 0 && rootNode.size() >
parameter.getJsonRpcMaxBatchSize()) ...), and update BufferedResponseWrapper (or
the code that passes maxResponseSize) to only cap/validate response bytes when
maxResponseSize > 0 so 0/negative means unlimited; ensure references to
maxBatchSize and maxResponseSize in parameter and BufferedResponseWrapper follow
this rule.

---

Nitpick comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 110-121: Add a clarifying comment in JsonRpcServlet near the
JsonRpcResponseTooLargeException catch (around where rpcServer.handle,
bufferedResp.commitToResponse and writeJsonRpcError are used) that explains the
buffering contract and the intentional trade-off for batched requests:
BufferedResponseWrapper only buffers body and header mutations until
commitToResponse(), so when JsonRpcResponseTooLargeException is thrown the
handler writes a single top-level error (id=null) directly to the unwrapped resp
and returns without calling bufferedResp.commitToResponse(), discarding any
partial per-call results produced for the batch; this documents why a single
error is returned instead of per-call partials.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: f1824724-0ab2-4c30-b121-d0ea656e8945

📥 Commits

Reviewing files that changed from the base of the PR and between bac4886 and 32618c5.

📒 Files selected for processing (5)
  • common/src/main/java/org/tron/common/parameter/CommonParameter.java
  • framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
  • framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
  • framework/src/main/resources/config.conf
🚧 Files skipped from review as they are similar to previous changes (2)
  • framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
  • framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java

Comment on lines +92 to +98
try {
body = readBody(req.getInputStream());
rootNode = MAPPER.readTree(body);
} catch (IOException e) {
writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
return;
}

⚠️ Potential issue | 🟡 Minor

Collapsing IOException and JSON parse errors into a single PARSE_ERROR is misleading and hides real failures.

readBody can throw IOException for legitimate transport problems (client aborts, socket reset, read timeout) — none of those are "Parse json error", yet clients would see -32700. MAPPER.readTree throws JsonProcessingException (an IOException subclass), which is the only case that should map to -32700. Split the handling so each failure mode reports an accurate code/message, and consider letting true transport IOExceptions propagate (the container will log them appropriately) rather than masking them as a JSON-RPC parse error. The diff in the critical comment above already splits these; the key point is: don’t let a network blip look like malformed JSON.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 92 - 98, The current try/catch in JsonRpcServlet conflates
transport IO failures from readBody and JSON parsing failures from
MAPPER.readTree into a single JsonRpcError.PARSE_ERROR; change the code to call
readBody(req.getInputStream()) and MAPPER.readTree(body) in sequence and handle
failures separately: catch com.fasterxml.jackson.core.JsonProcessingException
(or JsonParseException) around MAPPER.readTree and call writeJsonRpcError(resp,
JsonRpcError.PARSE_ERROR, "Parse json error", null), but do not map plain
java.io.IOException from readBody to PARSE_ERROR—either let IOExceptions
propagate (so the servlet/container handles/logs them) or handle them as a
distinct transport/internal error and log appropriately; reference readBody,
MAPPER.readTree, writeJsonRpcError, and JsonRpcError.PARSE_ERROR when making the
change.

Comment thread framework/src/main/resources/config.conf Outdated
@cubic-dev-ai cubic-dev-ai Bot left a comment


1 issue found across 2 files (changes from recent commits).

Prompt for AI agents (unresolved issues)

Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.


<file name="framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java">

<violation number="1" location="framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java:88">
P2: `getWriter()` uses JVM default charset instead of the response charset, which can corrupt JSON-RPC responses containing non-ASCII characters.</violation>
</file>



@Override
public PrintWriter getWriter() {
return new PrintWriter(outputStream, true);

@cubic-dev-ai cubic-dev-ai Bot Apr 23, 2026


P2: getWriter() uses JVM default charset instead of the response charset, which can corrupt JSON-RPC responses containing non-ASCII characters.

Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java, line 88:

<comment>`getWriter()` uses JVM default charset instead of the response charset, which can corrupt JSON-RPC responses containing non-ASCII characters.</comment>

<file context>
@@ -82,6 +83,11 @@ public ServletOutputStream getOutputStream() {
 
+  @Override
+  public PrintWriter getWriter() {
+    return new PrintWriter(outputStream, true);
+  }
+
</file context>

writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
return;
}
if (rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()) {


[MUST] maxBatchSize=0 does not honor the "0 means no limit" contract

The batch-size guard at line 99 reads rootNode.size() > parameter.getJsonRpcMaxBatchSize() without a > 0 precondition. The PR description and the sibling keys maxResponseSize (BufferedResponseWrapper.checkLimit) and maxAddressSize (LogFilter) all state that 0 disables the limit; this branch does the opposite — configuring maxBatchSize = 0 rejects every non-empty batch with -32005. That is a silent operator trap.

Suggestion: add a max > 0 && guard so the three limits follow the same semantics.
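A minimal, dependency-free sketch of that guard (the class and method names are illustrative, not the servlet's actual code):

```java
public class BatchLimitSketch {

  /**
   * Flags only when a positive limit is configured and exceeded, so a
   * maxBatchSize of 0 or below means "no limit" -- the same semantics
   * documented for maxResponseSize and maxAddressSize.
   */
  public static boolean exceedsBatchLimit(int batchSize, int maxBatchSize) {
    return maxBatchSize > 0 && batchSize > maxBatchSize;
  }
}
```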

writeJsonRpcError(resp, JsonRpcError.RESPONSE_TOO_LARGE, e.getMessage(), idNode);
return;
} catch (Exception e) {
throw new IOException("RPC execution failed", e);


[SHOULD] Wrapping every handler exception as IOException escapes as HTTP 500

catch (Exception e) throw new IOException("RPC execution failed", e) rethrows all non-JsonRpcResponseTooLargeException runtime errors. The parent RateLimiterServlet.service only catches ServletException | IOException and re-throws, so the servlet container will emit an HTTP 500 with no JSON-RPC body. jsonrpc4j's ErrorResolver would normally have produced a structured error response on HTTP 200 — that contract is now lost, and web3.js/ethers-style clients will interpret 500 as a transport failure and retry.

Suggestion: either drop this catch (let the original IOException path from rpcServer.handle propagate) or log the root cause and emit a -32603 Internal error through writeJsonRpcError so the HTTP 200 + JSON-RPC error contract is preserved.

JsonNode rootNode;
try {
body = readBody(req.getInputStream());
rootNode = MAPPER.readTree(body);


[SHOULD] Empty or whitespace-only body makes readTree return null and NPE on the next line

MAPPER.readTree(body) returns null (not a MissingNode) for a zero-length input, so the subsequent rootNode.isArray() at line 99 throws NullPointerException. The NPE is not caught by the IOException clause at line 95; it bubbles out as an unchecked exception, gets wrapped by the catch at line 116, and the client sees HTTP 500 instead of the structured -32700 Parse error that the parse path was meant to return.

Suggestion: treat rootNode == null || rootNode.isMissingNode() as a parse error and return -32700 via writeJsonRpcError.

if (rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()) {
writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
"Batch size " + rootNode.size() + " exceeds the limit of "
+ parameter.getJsonRpcMaxBatchSize(), null);


[SHOULD] Batch-size rejection returns a single error object, not a batch array

A JSON-RPC 2.0 batch request must be answered with a JSON array of responses. When rootNode.isArray() is true and the size exceeds maxBatchSize, the current code returns a single-object error response, which breaks the batch contract — standard clients (web3j, web3.js, ethers) will fail to parse the response as a batch and either error out or retry the whole batch.

Suggestion: when rejecting an over-size batch, wrap the -32005 error object in a JSON array of length 1 (id=null), or document this deviation prominently in config.conf and the PR description.
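One way the length-1 wrapping could look, as a dependency-free sketch. The JSON is hand-assembled here only to keep the example self-contained; the servlet's writeJsonRpcError should keep building the payload with ObjectMapper to avoid injection:

```java
public class BatchErrorSketch {

  /**
   * Wraps a single JSON-RPC error object in a length-1 array so that
   * batch-aware clients can still parse the response as an array.
   * Assumes the message contains no characters needing JSON escaping.
   */
  public static String batchError(int code, String message) {
    String error = "{\"jsonrpc\":\"2.0\",\"id\":null,"
        + "\"error\":{\"code\":" + code + ",\"message\":\"" + message + "\"}}";
    return "[" + error + "]";
  }
}
```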

Owner Author


If a batch request exceeds maxBatchSize, it should return a single-object error response. Suppose the batch size is over 100000; should I answer with a 100000-element array?

}
}

@Override


[SHOULD] getStatus() and setHeader("Content-Length", ...) are not intercepted

Header capture only covers setStatus, setContentType, setContentLength(int|long). Two gaps:

  1. getStatus() is not overridden, so callers (logging filter, metrics) read the underlying response status, not the this.status field — they will see SC_OK until commitToResponse() is called.
  2. setHeader("Content-Length", ...) / addHeader(...) pass straight through to the underlying response. jsonrpc4j currently calls setContentLength, but any downstream filter or library upgrade using setHeader would write a Content-Length to the real response before the size check runs.

Suggestion: override getStatus() to return this.status, and intercept setHeader / addHeader for the Content-Length name (case-insensitive).
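A stand-alone sketch of the missing interception. This is not the real wrapper; it omits the HttpServletResponseWrapper plumbing and the pass-through for unrelated headers:

```java
public class HeaderBufferSketch {

  private int status = 200; // buffered status, committed later
  private Integer contentLength; // buffered Content-Length, if any

  public void setStatus(int sc) {
    this.status = sc;
  }

  // Overriding getStatus() lets logging/metrics filters observe the
  // buffered status instead of the underlying response's SC_OK.
  public int getStatus() {
    return status;
  }

  // Intercept the header spelling of Content-Length case-insensitively
  // so it cannot reach the real response before the size check runs.
  public void setHeader(String name, String value) {
    if ("content-length".equalsIgnoreCase(name)) {
      this.contentLength = Integer.valueOf(value);
      return;
    }
    // other headers would be buffered or forwarded to super here
  }

  public Integer bufferedContentLength() {
    return contentLength;
  }
}
```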

@@ -0,0 +1,17 @@
package org.tron.core.exception.jsonrpc;

public class JsonRpcResponseTooLargeException extends RuntimeException {


[SHOULD] New exception breaks the project's JsonRpcException family

Every other class in common/src/main/java/org/tron/core/exception/jsonrpc/ extends JsonRpcException (which in turn extends TronException and reports itself). This new class extends RuntimeException instead, so it cannot be wired into @JsonRpcError annotations or the shared JsonRpcErrorResolver if future methods need to surface response-too-large through the normal error path.

Suggestion: extend JsonRpcException, or document in a Javadoc why an unchecked non-TronException base was chosen (e.g. to avoid forcing a throws clause onto the wrapper's method signatures).
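The realignment is mechanical; a compilable sketch with stand-ins for the project's base classes (the real TronException and JsonRpcException live under org.tron.core.exception, and their constructors may differ):

```java
// Stand-ins so the sketch compiles on its own.
class TronException extends Exception {
  TronException(String message) {
    super(message);
  }
}

class JsonRpcException extends TronException {
  JsonRpcException(String message) {
    super(message);
  }
}

/**
 * Extending JsonRpcException keeps the class wireable into the shared
 * error-resolver machinery, at the cost of becoming a checked exception
 * that callers must declare.
 */
class JsonRpcResponseTooLargeException extends JsonRpcException {
  JsonRpcResponseTooLargeException(String message) {
    super(message);
  }
}
```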

}

@Override
public ServletInputStream getInputStream() {


[SHOULD] getInputStream and getReader are not mutually exclusive

Servlet spec requires that once one of getInputStream() / getReader() has been used on a request, the other must throw IllegalStateException. This wrapper returns a fresh stream/reader from the cached byte array for both methods, allowing arbitrary interleaving. jsonrpc4j only calls one today, so the bug is latent — but any future filter that reads the body through the other accessor would silently double-read.

Suggestion: track which accessor was used first and throw IllegalStateException on the second, as the spec requires.
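The tracking is two booleans. A self-contained sketch follows; the method names mirror the servlet API, but this is not the real CachedBodyRequestWrapper and it omits the ServletInputStream adapter:

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class CachedBodySketch {

  private final byte[] body;
  private boolean streamUsed;
  private boolean readerUsed;

  public CachedBodySketch(byte[] body) {
    this.body = body;
  }

  public InputStream getInputStream() {
    if (readerUsed) {
      throw new IllegalStateException("getReader() has already been called");
    }
    streamUsed = true;
    return new ByteArrayInputStream(body);
  }

  public BufferedReader getReader() {
    if (streamUsed) {
      throw new IllegalStateException("getInputStream() has already been called");
    }
    readerUsed = true;
    return new BufferedReader(new InputStreamReader(
        new ByteArrayInputStream(body), StandardCharsets.UTF_8));
  }
}
```

Repeated calls to the same accessor stay legal; only crossing over to the other one trips the IllegalStateException, which matches the spec's intent.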

public int jsonRpcMaxBatchSize = 100;
@Getter
@Setter
public int jsonRpcMaxResponseSize = 25 * 1024 * 1024;


[SHOULD] Size-limit fields use int; no range validation on config load

jsonRpcMaxResponseSize and the peer maxBytes parameter in BufferedResponseWrapper are int, capped at 2 GiB. The current 25 MiB default is fine, but reading via config.getInt(...) with no range validation means a negative config value silently becomes a permanent "no limit" state (thanks to the maxBytes > 0 guard) and a value near Integer.MAX_VALUE will put ByteArrayOutputStream near its addressable limit.

Suggestion: either switch the field (and wrapper param) to long plus validate >= 0 during Args.applyConfigParams, or reject out-of-range values with a clear error at startup.
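A sketch of the startup-time validation half of that suggestion, with an illustrative helper name; the exact wiring into Args.applyConfigParams and the config keys' final types are assumptions:

```java
public class ConfigRangeSketch {

  /**
   * Rejects out-of-range size limits at startup so a negative value
   * cannot silently become "no limit" and a near-Integer.MAX_VALUE
   * value cannot push ByteArrayOutputStream toward its ceiling.
   */
  public static int validatedSize(String key, long raw) {
    long max = Integer.MAX_VALUE - 8; // conservative array-size ceiling
    if (raw < 0 || raw > max) {
      throw new IllegalArgumentException(
          key + " must be in [0, " + max + "], got " + raw);
    }
    return (int) raw;
  }
}
```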

* only forwarded to the real response via {@link #commitToResponse()}, preventing a timed-out
* handler thread from racing with the timeout error writer.
*/
public class BufferedResponseWrapper extends HttpServletResponseWrapper {


[DISCUSS] Consider Jetty SizeLimitHandler for the response-size cap

Scoped to the response-size concern only — the batch-size and address-size checks must stay at the JSON-RPC layer and are out of scope for this discussion.

The response-size cap this class implements can also be enforced one layer lower. Jetty 9.4 (project is on 9.4.57 per framework/build.gradle:45) ships org.eclipse.jetty.server.handler.SizeLimitHandler, which wraps the output stream, counts bytes on the fly, and aborts the response when the configured cap is reached. Wrapping the ServletContextHandler in FullNodeJsonRpcHttpService.addServlet (line 30) with it would:

  • Drop the full in-memory buffer. Peak memory per connection falls from ~maxResponseSize (the ByteArrayOutputStream in checkLimit) to the Jetty output buffer (~8 KiB), i.e. true streaming instead of buffer-then-commit.
  • Remove this wrapper and its JsonRpcResponseTooLargeException plumbing — the same attacker model (oversized response → bounded memory) is still covered.

Tradeoff: SizeLimitHandler aborts with HttpException.RuntimeException → HTTP 413 plain-text, not a JSON-RPC -32003 structured error. Clients like web3j / ethers would see a transport-level failure rather than a parseable JSON-RPC error object. If that structured-error UX is a product requirement (for example so clients can auto-fallback to a smaller range), keep the current wrapper. If the cap is intended as a safety net where 413 is acceptable, SizeLimitHandler is simpler and more memory-efficient.

No change requested — just worth deciding explicitly which tradeoff this PR is making.


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

♻️ Duplicate comments (3)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (3)

109-123: ⚠️ Potential issue | 🟠 Major

Wrapping every handler exception as IOException and ignoring overflow for batch requests breaks the JSON-RPC contract.

Two issues in the dispatch/overflow path:

  1. catch (Exception e) throw new IOException("RPC execution failed", e); (Lines 111–113) converts every non-IOException runtime failure inside rpcServer.handle(...) into a servlet IOException. The container will surface that as HTTP 500 with no JSON-RPC body, so web3.js / ethers / web3j clients see a transport failure instead of jsonrpc4j's structured error response (which would otherwise be HTTP 200 + a JSON-RPC error object via JsonRpcErrorResolver). Either drop this catch and let rpcServer.handle propagate, or log and emit a -32603 Internal error through writeJsonRpcError to preserve the HTTP-200 + JSON-RPC-error contract.

  2. On overflow for a batch request, Line 116 sets idNode = null and writeJsonRpcError writes a single error object. JSON-RPC 2.0 mandates that the response to a non-empty batch be a JSON array of responses; a bare object will fail to parse on standard batch-aware clients. The same shape problem exists for the batch-rejection path on Lines 99–101. At minimum, when the request was a batch, wrap the error object in a one-element array, or document this deviation explicitly in config.conf.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 109 - 123, The catch in JsonRpcServlet around rpcServer.handle(...)
currently converts all exceptions to IOException which yields HTTP 500 and no
JSON-RPC body; remove that catch or replace it with logic that logs the
exception and calls writeJsonRpcError(..., JsonRpcError.INTERNAL_ERROR, ...,
idNode) so the servlet returns HTTP 200 with a JSON-RPC -32603 error (use the
same idNode determination as below). Also fix the overflow handling: when
bufferedResp.isOverflow() and rootNode.isArray() is true, wrap the single error
object emitted by writeJsonRpcError(...) in a one-element JSON array (or use a
new helper that emits an array response for batch requests) so batch requests
receive an array response; apply the same array-wrapping change to the earlier
batch-rejection path that currently returns a bare object.

89-97: ⚠️ Potential issue | 🔴 Critical

Critical: request body is read with no size cap — DoS vector and direct contradiction of the PR goal.

readBody(req.getInputStream()) at Line 92 calls into the helper at Lines 125–133, which copies the entire request stream into a ByteArrayOutputStream with no limit. A client sending a large Content-Length (or chunked body) will force the servlet to allocate the full payload in heap before any of the new validations (batch size, address size, response size) get a chance to run, so the very thing this PR is meant to mitigate is bypassed at the entry point.

Additionally, the catch on Line 94 maps every IOException to PARSE_ERROR (-32700). Transport-level failures (client abort, socket reset, read timeout) are not parse errors and shouldn't be reported as such — and once a request-size cap is added, oversized requests should map to EXCEED_LIMIT (-32005), not -32700.

🔒️ Suggested direction
-    byte[] body;
-    JsonNode rootNode;
-    try {
-      body = readBody(req.getInputStream());
-      rootNode = MAPPER.readTree(body);
-    } catch (IOException e) {
-      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
-      return;
-    }
+    byte[] body;
+    try {
+      body = readBody(req.getInputStream(), parameter.getJsonRpcMaxRequestSize());
+    } catch (RequestTooLargeException e) {
+      writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
+          "Request body exceeds the limit of " + parameter.getJsonRpcMaxRequestSize(), null);
+      return;
+    }
+    JsonNode rootNode;
+    try {
+      rootNode = MAPPER.readTree(body);
+    } catch (JsonProcessingException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
+      return;
+    }
@@
-  private byte[] readBody(InputStream in) throws IOException {
+  private byte[] readBody(InputStream in, int maxSize) throws IOException {
     ByteArrayOutputStream buffer = new ByteArrayOutputStream();
     byte[] tmp = new byte[4096];
-    int n;
-    while ((n = in.read(tmp)) != -1) {
+    int n;
+    int total = 0;
+    while ((n = in.read(tmp)) != -1) {
+      total += n;
+      if (maxSize > 0 && total > maxSize) {
+        throw new RequestTooLargeException("Request body exceeds " + maxSize + " bytes");
+      }
       buffer.write(tmp, 0, n);
     }
     return buffer.toByteArray();
   }

Also applies to: 125-133

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 89 - 97, The request body is currently read without any size cap
via readBody(InputStream) in JsonRpcServlet, enabling a DoS; modify the readBody
helper used by JsonRpcServlet to accept a maximum allowed bytes parameter and
enforce it while streaming (stop reading and throw a specific exception when the
limit is exceeded), then update the JsonRpcServlet request handling to catch
that size-exceeded exception and call writeJsonRpcError(resp,
JsonRpcError.EXCEED_LIMIT, "Request body exceeds allowed size", null); also
distinguish transport IO failures from parse errors by returning a different
error (e.g., map plain IOExceptions to a non-parse error or rethrow to be
handled as a transport error) so only actual JSON parse failures map to
JsonRpcError.PARSE_ERROR in the try/catch around MAPPER.readTree.

89-103: ⚠️ Potential issue | 🟠 Major

Empty body NPE and maxBatchSize=0 operator trap.

Two related correctness gaps in the parse/validate block:

  1. MAPPER.readTree(body) returns null for an empty/whitespace-only body, so Line 98's rootNode.isArray() will throw NullPointerException. The NPE escapes the IOException catch on Line 94, gets wrapped by the catch (Exception e) on Line 111, and the client receives an HTTP 500 instead of the structured -32700 Parse error this code path is meant to produce. Treat rootNode == null || rootNode.isMissingNode() the same as a parse failure.

  2. The batch guard on Line 98 is rootNode.size() > parameter.getJsonRpcMaxBatchSize() with no > 0 precondition. The PR description and the sibling limits (maxResponseSize in BufferedResponseWrapper.checkLimit, maxAddressSize in LogFilter) all document 0 as "no limit", but here maxBatchSize=0 rejects every non-empty batch with -32005. Add getJsonRpcMaxBatchSize() > 0 && so the three limits share the same semantics.

🛠️ Proposed fix
-    if (rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()) {
+    if (rootNode == null || rootNode.isMissingNode()) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
+      return;
+    }
+    int maxBatch = parameter.getJsonRpcMaxBatchSize();
+    if (rootNode.isArray() && maxBatch > 0 && rootNode.size() > maxBatch) {
       writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
-          "Batch size " + rootNode.size() + " exceeds the limit of "
-              + parameter.getJsonRpcMaxBatchSize(), null);
+          "Batch size " + rootNode.size() + " exceeds the limit of " + maxBatch, null);
       return;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 89 - 103, The parse/validate block in JsonRpcServlet currently
assumes MAPPER.readTree(body) returns a non-null node and treats
getJsonRpcMaxBatchSize()==0 as a strict limit; modify the logic in the try/catch
after calling readBody/readTree so that if rootNode is null or
rootNode.isMissingNode() you call writeJsonRpcError(resp,
JsonRpcError.PARSE_ERROR, "Parse error", null) and return (same behavior as on
IOException), and change the batch-size guard to only apply when
parameter.getJsonRpcMaxBatchSize() > 0 (i.e. use getJsonRpcMaxBatchSize() > 0 &&
rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()),
keeping the existing writeJsonRpcError(JsonRpcError.EXCEED_LIMIT, ...) and
return behavior for that case.
🧹 Nitpick comments (1)
framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java (1)

14-165: Add coverage for getWriter().

The wrapper's getWriter() path (added in this PR) is not exercised by any test. At minimum, please add cases that verify (a) writes via getWriter() are still subject to the same maxBytes overflow detection, (b) bytes written via the writer are flushed and committed by commitToResponse(), and (c) non-ASCII characters round-trip correctly under the response's character encoding. Without these, the encoding/caching concerns called out on the main class will silently regress.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java`
around lines 14 - 165, Add JUnit tests that exercise
BufferedResponseWrapper.getWriter(): create wrappers with small maxBytes and
verify writes via getWriter() trigger isOverflow() the same way as
getOutputStream(); verify that text written through getWriter() is
flushed/committed when commitToResponse() is called by asserting
MockHttpServletResponse content and content length after commitToResponse(); and
add a test that sets a UTF-8 charset on the wrapper/response (e.g., via
setContentType or setCharacterEncoding), writes non-ASCII text via getWriter(),
calls commitToResponse(), and asserts the read-back bytes/string matches the
original to ensure correct encoding round-trip. Ensure tests reference
BufferedResponseWrapper.getWriter(), BufferedResponseWrapper.commitToResponse(),
BufferedResponseWrapper.isOverflow(), and MockHttpServletResponse content
accessors.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 18ece4a9-30cb-4831-82a4-0314be2d3298

📥 Commits

Reviewing files that changed from the base of the PR and between 32618c5 and 2dd6a60.

📒 Files selected for processing (3)
  • framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
  • framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java

Comment on lines +117 to +120
@Override
public PrintWriter getWriter() {
return new PrintWriter(outputStream, true);
}


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

Java PrintWriter(OutputStream, boolean autoFlush) constructor default charset

💡 Result:

For Java’s PrintWriter(OutputStream out, boolean autoFlush) constructor, the characters are encoded using the JVM’s default character encoding (i.e., the default charset of that JVM).



🏁 Script executed:

cat -n framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java | head -140

Repository: 317787106/java-tron

Length of output: 4481


getWriter() uses JVM default charset and creates a new PrintWriter on every call.

PrintWriter(OutputStream, boolean) constructs the underlying OutputStreamWriter with Charset.defaultCharset(), not the response's character encoding. On hosts where the JVM default isn't UTF-8 (e.g., Windows Cp1252, some containerized JVMs), any non-ASCII bytes produced by jsonrpc4j via getWriter() will be mis-encoded and the JSON body will be corrupted. The Servlet specification also requires repeated getWriter() calls to return the same instance and to throw IllegalStateException if getOutputStream() was already used; both invariants are currently violated. Discarding a PrintWriter without flushing can also drop characters still buffered inside its internal StreamEncoder.

🛡️ Proposed fix
+import java.io.OutputStreamWriter;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
@@
   private String contentType;
+  private PrintWriter writer;
+  private boolean outputStreamUsed;
  @Getter
   private boolean overflow = false;
@@
  @Override
   public ServletOutputStream getOutputStream() {
+    if (writer != null) {
+      throw new IllegalStateException("getWriter() has already been called");
+    }
+    outputStreamUsed = true;
     return outputStream;
   }
 
  @Override
   public PrintWriter getWriter() {
-    return new PrintWriter(outputStream, true);
+    if (outputStreamUsed) {
+      throw new IllegalStateException("getOutputStream() has already been called");
+    }
+    if (writer == null) {
+      String enc = getCharacterEncoding();
+      Charset cs = (enc == null) ? StandardCharsets.UTF_8 : Charset.forName(enc);
+      writer = new PrintWriter(new OutputStreamWriter(outputStream, cs), true);
+    }
+    return writer;
   }
 
   public void commitToResponse() throws IOException {
+    if (writer != null) {
+      writer.flush();
+    }
     if (contentType != null) {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`
around lines 117 - 120, getWriter() currently creates a new PrintWriter using
the JVM default charset and doesn't respect the response encoding or the servlet
contract; change getWriter() in BufferedResponseWrapper to return a single
cached PrintWriter instance (create it lazily), construct it with an
OutputStreamWriter that uses the servlet response's character encoding (e.g.,
getCharacterEncoding() or default to UTF-8), ensure getWriter() throws
IllegalStateException if getOutputStream() was already called, and likewise make
getOutputStream() throw if the writer was already created; also ensure the
writer is flushed/closed properly (or its encoder flushed) when the wrapper is
committed so no buffered characters are dropped.
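The two key points of the proposed fix — building the `PrintWriter` over an `OutputStreamWriter` with an explicit charset, and flushing the writer before the buffered bytes are read — can be illustrated without any servlet machinery. This is a minimal sketch under our own names (`CharsetDemo`, `utf8Bytes` are illustrative, not part of the PR):

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;
import java.util.Arrays;

public class CharsetDemo {
  // Encode through a PrintWriter built over an explicit UTF-8 OutputStreamWriter,
  // rather than PrintWriter(OutputStream, boolean), which uses the JVM default.
  static byte[] utf8Bytes(String s) {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    PrintWriter w = new PrintWriter(new OutputStreamWriter(buf, StandardCharsets.UTF_8), true);
    w.print(s);
    // autoFlush only triggers on println/printf/format; without this explicit
    // flush, bytes can remain buffered in the writer's internal encoder.
    w.flush();
    return buf.toByteArray();
  }

  public static void main(String[] args) {
    // U+00E9 ("é") is two bytes in UTF-8 but one byte in Cp1252/ISO-8859-1,
    // so mixing charsets here corrupts any non-ASCII JSON payload.
    byte[] out = utf8Bytes("\u00e9");
    if (!Arrays.equals(out, new byte[] {(byte) 0xC3, (byte) 0xA9})) {
      throw new AssertionError("expected the UTF-8 byte sequence C3 A9");
    }
    System.out.println("ok");
  }
}
```

With the JVM-default constructor the same call could yield a single 0xE9 byte on a Cp1252 host, which is exactly the corruption the comment warns about.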

@317787106 317787106 force-pushed the hotfix/restrict_jsonrpc_size branch from b0c8e8a to e70ea6b Compare April 28, 2026 14:33
@coderabbitai coderabbitai Bot left a comment
Actionable comments posted: 2

♻️ Duplicate comments (5)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (3)

109-113: ⚠️ Potential issue | 🟠 Major

Re-throwing as IOException can bypass JSON-RPC structured errors.

Line 111–113 wraps handler exceptions into IOException, which can surface as servlet/container HTTP 500 rather than JSON-RPC error envelopes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 109 - 113, The current catch around rpcServer.handle(cachedReq,
bufferedResp) rethrows all exceptions as IOException, which can turn handler
errors into raw HTTP 500 responses instead of JSON-RPC error envelopes; change
the handler so that exceptions are caught and translated into a proper JSON-RPC
error written to bufferedResp (or let rpcServer produce the error envelope)
rather than throwing IOException — i.e., in the catch block use rpcServer or a
utility to build a JSON-RPC error object for cachedReq and write it to
bufferedResp (and log the exception) so clients receive a valid JSON-RPC error
response instead of an HTTP exception.
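The catch-and-translate pattern the prompt asks for can be sketched in isolation. The envelope builder below hand-escapes the message for brevity; the PR itself uses Jackson's ObjectMapper for this, which is the safer choice, and the class/method names here are illustrative only:

```java
public class JsonRpcErrorDemo {
  // Minimal escaping for the error message. Production code should build the
  // envelope with a JSON library (the PR uses ObjectMapper) to cover all
  // injection edge cases; this only handles backslash and quote.
  static String escape(String s) {
    return s.replace("\\", "\\\\").replace("\"", "\\\"");
  }

  // Build a JSON-RPC 2.0 error envelope; id is rendered as null when the
  // original request id could not be recovered.
  static String errorEnvelope(int code, String message) {
    return "{\"jsonrpc\":\"2.0\",\"id\":null,"
        + "\"error\":{\"code\":" + code + ",\"message\":\"" + escape(message) + "\"}}";
  }

  public static void main(String[] args) {
    // Instead of rethrowing as IOException (which surfaces as HTTP 500),
    // a handler failure would be written to the response as this envelope.
    System.out.println(errorEnvelope(-32003, "response too large"));
  }
}
```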

89-97: ⚠️ Potential issue | 🔴 Critical

Request body is still read unbounded into memory.

Line 92 together with lines 125–133 buffers the entire request with no cap. This leaves an OOM/heap-pressure DoS path open before validation. Also, line 94 maps all IOExceptions to a parse error, which can misclassify transport failures.

Also applies to: 125-133

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 89 - 97, The code currently reads the entire request into memory
via readBody and then passes the full byte[] to MAPPER.readTree, which allows
unbounded allocation and also maps all IOExceptions to JsonRpcError.PARSE_ERROR;
update JsonRpcServlet to limit and validate request size by replacing the
unbounded readBody usage with a capped read (or InputStream wrapper with a
maxBytes threshold) or use Jackson's streaming parser directly on the request
InputStream to enforce a maximum allowed body size, return a size-related error
(e.g., WRITE a JsonRpcError indicating payload too large) when the cap is
exceeded, and distinguish IO exceptions caused by transport failures from JSON
parse errors by catching and mapping JsonProcessingException / Jackson parsing
exceptions to JsonRpcError.PARSE_ERROR while mapping plain IOExceptions to an
appropriate transport/IO error response via writeJsonRpcError (references:
JsonRpcServlet, readBody, MAPPER.readTree, writeJsonRpcError,
JsonRpcError.PARSE_ERROR).
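The capped-read approach described above can be sketched with plain java.io; the class and method names are ours, and a real servlet version would read from `request.getInputStream()` instead of a test stream:

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

public class CappedReadDemo {
  // Read the stream into memory, but never accumulate more than maxBytes.
  // Returning null signals "payload too large" so the caller can answer with
  // a size-related JSON-RPC error before any parsing happens.
  static byte[] readBounded(InputStream in, int maxBytes) throws IOException {
    ByteArrayOutputStream buf = new ByteArrayOutputStream();
    byte[] chunk = new byte[8192];
    int n;
    while ((n = in.read(chunk)) != -1) {
      if (buf.size() + n > maxBytes) {
        return null; // over the cap: stop buffering immediately
      }
      buf.write(chunk, 0, n);
    }
    return buf.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] small = readBounded(new ByteArrayInputStream(new byte[100]), 1024);
    byte[] big = readBounded(new ByteArrayInputStream(new byte[2048]), 1024);
    if (small == null || small.length != 100 || big != null) {
      throw new AssertionError();
    }
    System.out.println("ok");
  }
}
```

Distinguishing parse errors from transport errors then falls out naturally: an IOException from `readBounded` is a transport failure, while a JsonProcessingException from the subsequent Jackson parse maps to PARSE_ERROR.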

98-102: ⚠️ Potential issue | 🟠 Major

Batch limit handling still violates two contracts.

Line 98 lacks a maxBatchSize > 0 guard, so maxBatchSize=0 rejects every non-empty batch. Also, for an oversized batch, Line 99–101 returns a single error object instead of a batch-array response shape.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 98 - 102, The batch-limit check should only apply when
parameter.getJsonRpcMaxBatchSize() > 0 and oversized-batch responses must be a
JSON-RPC batch (an array), not a single error object. Change the if condition
that uses rootNode.isArray() to also require parameter.getJsonRpcMaxBatchSize()
> 0, and when the batch is too large call a routine that writes a batch-shaped
error (an array containing the JsonRpcError.EXCEED_LIMIT object) instead of
writeJsonRpcError's single-object response; you can either add a new helper
(e.g., writeJsonRpcErrorBatch) or extend writeJsonRpcError to accept a flag to
wrap the error in an array — reference rootNode,
parameter.getJsonRpcMaxBatchSize(), writeJsonRpcError, and
JsonRpcError.EXCEED_LIMIT to locate the change.
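The guard condition itself is small enough to show directly. This sketch (names ours) captures the "≤ 0 means no limit" semantics the config documents; the response-shape question — single error object versus array — is the reviewer's second point and is left to the actual writeJsonRpcError change:

```java
public class BatchLimitDemo {
  // Reject only when a positive limit is configured AND the batch exceeds it.
  // Without the maxBatchSize > 0 guard, a limit of 0 would reject every
  // non-empty batch instead of meaning "unlimited".
  static boolean exceedsBatchLimit(int batchLen, int maxBatchSize) {
    return maxBatchSize > 0 && batchLen > maxBatchSize;
  }

  public static void main(String[] args) {
    if (exceedsBatchLimit(5, 0)) {
      throw new AssertionError("a limit of 0 must mean no limit");
    }
    if (!exceedsBatchLimit(101, 100) || exceedsBatchLimit(100, 100)) {
      throw new AssertionError("limit must be inclusive");
    }
    System.out.println("ok");
  }
}
```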
framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java (2)

69-71: ⚠️ Potential issue | 🟠 Major

Writer/stream handling still breaks servlet contract and can mis-encode output.

Line 69 uses the JVM-default charset (PrintWriter(OutputStream, ...)), lines 115/120 allow both writer and stream paths, and lines 124–131 don't flush the writer before commit. This can produce corrupted or truncated JSON.

Suggested fix
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
+import java.io.OutputStreamWriter;
 import java.io.PrintWriter;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
@@
-  private final PrintWriter writer = new PrintWriter(outputStream, true);
+  private PrintWriter writer;
+  private boolean outputStreamUsed;
@@
  @Override
   public ServletOutputStream getOutputStream() {
+    if (writer != null) {
+      throw new IllegalStateException("getWriter() has already been called");
+    }
+    outputStreamUsed = true;
     return outputStream;
   }
@@
   public PrintWriter getWriter() {
-    return writer;
+    if (outputStreamUsed) {
+      throw new IllegalStateException("getOutputStream() has already been called");
+    }
+    if (writer == null) {
+      String enc = getCharacterEncoding();
+      Charset cs = (enc == null) ? StandardCharsets.UTF_8 : Charset.forName(enc);
+      writer = new PrintWriter(new OutputStreamWriter(outputStream, cs), true);
+    }
+    return writer;
   }
@@
   public void commitToResponse() throws IOException {
+    if (writer != null) {
+      writer.flush();
+    }
     if (contentType != null) {
       actual.setContentType(contentType);
     }

Also applies to: 114-122, 124-131

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`
around lines 69 - 71, BufferedResponseWrapper currently constructs a PrintWriter
with the platform default charset and allows both writer and stream paths to be
used and doesn't flush the writer before commit, causing mis-encoded or
truncated output; fix by replacing the eager PrintWriter creation with a
lazily-initialized Writer that wraps the existing outputStream using an explicit
charset (e.g. UTF-8) via an OutputStreamWriter, enforce mutual exclusivity
between getWriter() and getOutputStream() (throw IllegalStateException if the
other has been obtained) in the methods that expose them, and ensure commit()
(or the method that finalizes response) flushes and closes the writer if
non-null before writing/committing the outputStream so all bytes are correctly
encoded and flushed.

29-31: ⚠️ Potential issue | 🟡 Minor

Status/header buffering is incomplete.

Line 29 tracks status internally, but there is no getStatus() override. Also, setHeader/addHeader("Content-Length", ...) are not intercepted, so downstream code can leak Content-Length to the real response before wrapper commit.

Also applies to: 90-102, 124-131

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`
around lines 29 - 31, BufferedResponseWrapper currently tracks status in the
private field status but does not override getStatus(), and it doesn't intercept
header methods so downstream code can write Content-Length to the real response
prematurely; update BufferedResponseWrapper to override getStatus() to return
the internal status field and override all mutable status APIs (setStatus(int),
setStatus(int,String), sendError(...)) to update that field; also override
header APIs that can convey content length (setHeader(String,String),
addHeader(String,String), setIntHeader(String,int), addIntHeader(String,int)) to
detect "Content-Length" (case-insensitive), store/ignore it in the wrapper's
buffered headers rather than delegating to the underlying response until commit,
and ensure commit writes the buffered Content-Length appropriately.
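The buffer-then-commit idea for status and headers can be shown without the servlet API. This standalone sketch (class and method names ours, standing in for HttpServletResponseWrapper overrides) holds headers back and replaces any stale Content-Length with the real buffered length at commit time:

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BufferedHeadersDemo {
  private final Map<String, String> headers = new LinkedHashMap<>();
  private int status = 200;

  // Buffer everything; nothing reaches the real response until commit().
  void setHeader(String name, String value) { headers.put(name, value); }
  void setStatus(int sc) { status = sc; }
  int getStatus() { return status; } // the override the comment says is missing

  // On commit, drop any Content-Length set downstream (case-insensitive)
  // and emit the length of the actually buffered body instead.
  Map<String, String> commit(int bodyLength) {
    Map<String, String> out = new LinkedHashMap<>();
    for (Map.Entry<String, String> e : headers.entrySet()) {
      if (!"content-length".equalsIgnoreCase(e.getKey())) {
        out.put(e.getKey(), e.getValue());
      }
    }
    out.put("Content-Length", String.valueOf(bodyLength));
    return out;
  }

  public static void main(String[] args) {
    BufferedHeadersDemo r = new BufferedHeadersDemo();
    r.setHeader("Content-Length", "999"); // stale value set by downstream code
    r.setHeader("Content-Type", "application/json");
    r.setStatus(413);
    Map<String, String> committed = r.commit(42);
    if (!"42".equals(committed.get("Content-Length")) || r.getStatus() != 413) {
      throw new AssertionError();
    }
    System.out.println("ok");
  }
}
```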
🧹 Nitpick comments (1)
framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java (1)

25-165: Please add writer-path contract tests to prevent regressions.

Current tests only exercise getOutputStream(). Add cases for getWriter() encoding/flush behavior and IllegalStateException exclusivity between getWriter() and getOutputStream().

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In
`@framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java`
around lines 25 - 165, Add tests exercising the writer path for
BufferedResponseWrapper: create tests that call getWriter() to write UTF-8 text,
flush and/or close the writer, then call commitToResponse() and assert the Mock
response body bytes and content length match the written text and encoding; also
add tests that calling getWriter() after using getOutputStream() (and vice
versa) throws IllegalStateException to enforce exclusivity between getWriter()
and getOutputStream(). Target the same test class and use the existing mockResp
and methods like getWriter(), getOutputStream(), commitToResponse(),
isOverflow(), and getContentAsByteArray() to validate behavior and prevent
regressions.
ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: ef9899cb-90af-4c2b-aa17-169cb7facbde

📥 Commits

Reviewing files that changed from the base of the PR and between 2dd6a60 and e70ea6b.

📒 Files selected for processing (10)
  • common/src/main/java/org/tron/common/parameter/CommonParameter.java
  • common/src/main/java/org/tron/core/config/args/NodeConfig.java
  • common/src/main/resources/reference.conf
  • framework/src/main/java/org/tron/core/config/args/Args.java
  • framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
  • framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
  • framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogFilter.java
  • framework/src/main/resources/config.conf
  • framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java
✅ Files skipped from review due to trivial changes (2)
  • framework/src/main/java/org/tron/core/config/args/Args.java
  • common/src/main/java/org/tron/core/config/args/NodeConfig.java
🚧 Files skipped from review as they are similar to previous changes (3)
  • common/src/main/java/org/tron/common/parameter/CommonParameter.java
  • framework/src/main/resources/config.conf
  • framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java

Comment on lines +403 to +411

# Maximum number of requests in a JSON-RPC batch, >0 otherwise no limit
maxBatchSize = 100

# Maximum response body size in bytes for JSON-RPC (default 25MB)
maxResponseSize = 26214400

# Maximum number of addresses in a single JSON-RPC request
maxAddressSize = 1000

⚠️ Potential issue | 🟠 Major

Missing maxRequestTimeout configuration key from the stated objective.

Lines 403–411 add the size-related guards, but the linked objective also requires configurable request-timeout control. Without a node.jsonrpc.maxRequestTimeout default here, operators cannot enable or override timeout behavior consistently via config.

Suggested config addition
   jsonrpc {
@@
     # Maximum number of addresses in a single JSON-RPC request
     maxAddressSize = 1000
+
+    # Maximum execution time per JSON-RPC request in milliseconds, <=0 means no limit
+    maxRequestTimeout = 0
   }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@common/src/main/resources/reference.conf` around lines 403 - 411, Add the
missing node.jsonrpc.maxRequestTimeout configuration key alongside the other
JSON-RPC guards (near maxBatchSize, maxResponseSize, maxAddressSize); define a
sensible default duration (for example node.jsonrpc.maxRequestTimeout = 30s) so
operators can override request timeout behavior via config and the node.jsonrpc
namespace is complete.

Comment on lines +109 to +110
try {
rpcServer.handle(cachedReq, bufferedResp);

⚠️ Potential issue | 🟠 Major

Request timeout objective is not enforced at execution point.

Line 109 executes rpcServer.handle(...) synchronously with no timeout boundary, but the linked objective requires request timeout control (-32002 on timeout).

I can propose a minimal timeout wrapper pattern (with cancellation + JSON-RPC -32002 mapping) if you want me to draft it.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 109 - 110, The call to rpcServer.handle(cachedReq, bufferedResp) in
JsonRpcServlet must be executed with a timeout and cancellation to satisfy the
request-timeout objective; replace the direct synchronous call with submitting a
task (Runnable/Callable) to an ExecutorService or use CompletableFuture, call
get(timeout, unit) and on TimeoutException cancel the future, set bufferedResp
status/content to a JSON-RPC error object with code -32002 and an appropriate
message, and ensure any thrown exceptions are handled (InterruptedException,
ExecutionException) and the task is cancelled to free resources; reference
rpcServer.handle, cachedReq, bufferedResp and the JsonRpcServlet request
handling block when making this change.
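The submit-with-deadline pattern the prompt describes is standard java.util.concurrent; a stdlib-only sketch follows (class name and the per-call executor are illustrative — a real servlet would share a pool, and the -32002 mapping is the reviewer's proposed code, not an existing constant in the PR):

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutDemo {
  // Run a handler with a hard deadline. On timeout, cancel the task and
  // return a JSON-RPC error payload with code -32002 (the reviewer's mapping).
  static String runWithTimeout(Callable<String> handler, long timeoutMs) {
    ExecutorService pool = Executors.newSingleThreadExecutor();
    Future<String> f = pool.submit(handler);
    try {
      return f.get(timeoutMs, TimeUnit.MILLISECONDS);
    } catch (TimeoutException e) {
      f.cancel(true); // interrupt the worker so it stops consuming resources
      return "{\"jsonrpc\":\"2.0\",\"id\":null,"
          + "\"error\":{\"code\":-32002,\"message\":\"request timeout\"}}";
    } catch (InterruptedException | ExecutionException e) {
      Thread.currentThread().interrupt(); // preserve interrupt status
      f.cancel(true);
      return "{\"jsonrpc\":\"2.0\",\"id\":null,"
          + "\"error\":{\"code\":-32603,\"message\":\"internal error\"}}";
    } finally {
      pool.shutdownNow();
    }
  }

  public static void main(String[] args) {
    String fast = runWithTimeout(() -> "ok", 1000);
    String slow = runWithTimeout(() -> { Thread.sleep(5000); return "late"; }, 50);
    if (!"ok".equals(fast) || !slow.contains("-32002")) {
      throw new AssertionError();
    }
    System.out.println("pass");
  }
}
```

In the servlet, `handler` would wrap `rpcServer.handle(cachedReq, bufferedResp)` and the returned error string would be written to the buffered response instead of the handler's partial output.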


Development

Successfully merging this pull request may close these issues.

[Feature] Introduce resource limits for JSON-RPC (batch size, response size, address size, timeout)

2 participants