feat(jsonrpc): add resource restrict for jsonrpc#69
Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings.
📝 Walkthrough

This PR implements JSON-RPC resource constraints through configuration-driven limits: batch size validation during request parsing, incremental response size buffering with overflow detection, and address count validation in filter requests. Request and response wrapper classes enable non-destructive buffering and early rejection when limits are breached.
Sequence Diagram

```mermaid
sequenceDiagram
    participant Client
    participant JsonRpcServlet
    participant ObjectMapper as JSON Parser
    participant CachedBodyRequestWrapper as CachedBody<br/>Wrapper
    participant RpcServer
    participant BufferedResponseWrapper as BufferedResp<br/>Wrapper
    participant HttpResponse
    Client->>JsonRpcServlet: POST /rpc (JSON payload)
    JsonRpcServlet->>JsonRpcServlet: Read request body
    JsonRpcServlet->>ObjectMapper: Parse JSON
    alt Parse Error
        JsonRpcServlet->>Client: JSON-RPC error (-32700)
    else Array Request
        JsonRpcServlet->>JsonRpcServlet: Check batch size
        alt Batch exceeds limit
            JsonRpcServlet->>Client: JSON-RPC error (-32005)
        else Within limit
            JsonRpcServlet->>CachedBodyRequestWrapper: Wrap request with cached body
            JsonRpcServlet->>BufferedResponseWrapper: Wrap response with max size limit
            JsonRpcServlet->>RpcServer: handle(cachedReq, bufferedResp)
            RpcServer->>BufferedResponseWrapper: Write response data
            BufferedResponseWrapper->>BufferedResponseWrapper: Track bytes written
            alt Response overflows limit
                BufferedResponseWrapper->>BufferedResponseWrapper: Mark overflow=true
            end
            JsonRpcServlet->>BufferedResponseWrapper: commitToResponse()
            alt Overflow detected
                JsonRpcServlet->>Client: JSON-RPC error (-32003)
            else Success
                BufferedResponseWrapper->>HttpResponse: Write buffered response
                HttpResponse->>Client: Response
            end
        end
    end
```
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed | ❌ 2 failed (warnings)
3 issues found across 9 files
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java">
<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:129">
P2: Race condition after timeout: `future.cancel(true)` only sends an interrupt; the RPC handler may still be running. Since `BufferedResponseWrapper` doesn't override header methods (`setStatus`, `setContentType`, etc.), the handler thread writes headers directly to the underlying `resp` concurrently with `writeJsonRpcError`. This can corrupt the HTTP response. Consider either wrapping header methods in `BufferedResponseWrapper` too, or awaiting actual thread termination before writing the error.</violation>
<violation number="2" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:155">
P1: No limit on request body size. `readBody` loads the entire input stream into memory before any validation runs. A malicious client can send a multi-gigabyte body to cause OOM. Add a maximum request body size check (e.g., reject if total bytes read exceeds a configured limit).</violation>
<violation number="3" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:168">
P1: The `message` string is concatenated into the JSON response without JSON-escaping. If `message` contains `"`, `\`, or control characters, the output is malformed JSON. Use the `ObjectMapper` (already available as `MAPPER`) to build the JSON safely, or at minimum escape the string.</violation>
</file>
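Violation 3 above can be illustrated outside the servlet. The sketch below is hypothetical (the class and helper names are not from the PR; the real fix should use the existing Jackson `MAPPER`): it shows why an unescaped `message` breaks the hand-built JSON, and what a minimal string escape must cover.

```java
// Hypothetical sketch of the escaping gap flagged in violation 3. In the real
// servlet the Jackson ObjectMapper (MAPPER) should serialize the error object;
// this only demonstrates which characters must be escaped in a JSON string.
public class JsonEscapeSketch {

  // Escapes the characters that break a double-quoted JSON string literal.
  static String escapeJson(String s) {
    StringBuilder sb = new StringBuilder(s.length() + 8);
    for (int i = 0; i < s.length(); i++) {
      char c = s.charAt(i);
      switch (c) {
        case '"':  sb.append("\\\""); break;
        case '\\': sb.append("\\\\"); break;
        case '\n': sb.append("\\n");  break;
        case '\r': sb.append("\\r");  break;
        case '\t': sb.append("\\t");  break;
        default:
          if (c < 0x20) {
            sb.append(String.format("\\u%04x", (int) c)); // other control chars
          } else {
            sb.append(c);
          }
      }
    }
    return sb.toString();
  }

  public static void main(String[] args) {
    String message = "bad \"quote\" and \\ backslash";
    // Naive concatenation of `message` would emit unbalanced quotes; escaping
    // keeps the payload a single well-formed JSON string.
    String body = "{\"error\":{\"code\":-32003,\"message\":\""
        + escapeJson(message) + "\"}}";
    System.out.println(body);
  }
}
```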
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
There was a problem hiding this comment.
Actionable comments posted: 4
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`:
- Around line 80-105: BufferedResponseWrapper currently overrides
getOutputStream() but not getWriter(), so callers using getWriter() bypass the
buffered size-checked stream; override getWriter() in BufferedResponseWrapper to
return a PrintWriter that wraps an OutputStreamWriter which writes into the same
buffering/size-checking stream used by the existing outputStream field (honoring
actual.getCharacterEncoding() or UTF-8 fallback), ensure the writer delegates
flush/close correctly without directly committing the real response, and reuse
the same buffer/commitToResponse() path so output written via getWriter() is
included in commitToResponse().
In
`@framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java`:
- Around line 21-49: CachedBodyRequestWrapper currently only overrides
getInputStream() to replay the stored byte[] body, so callers using getReader()
will still read the original (consumed) request; add an override for public
BufferedReader getReader() in CachedBodyRequestWrapper that constructs an
InputStreamReader over a new ByteArrayInputStream(body) using
getCharacterEncoding() (falling back to StandardCharsets.UTF_8 if null) and
wraps it in a BufferedReader so all downstream code reading via getReader() gets
the cached body.
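The replay idea behind that `getReader()` suggestion can be sketched in plain Java (servlet types omitted, class name hypothetical): each call builds a fresh reader over the same cached `byte[]`, falling back to UTF-8 when no encoding was negotiated.

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.IOException;
import java.io.InputStreamReader;
import java.nio.charset.Charset;
import java.nio.charset.StandardCharsets;

// Plain-Java model of the CachedBodyRequestWrapper.getReader() suggestion:
// the body is cached once, and every reader replays it from the start.
public class CachedBodySketch {
  private final byte[] body;

  public CachedBodySketch(byte[] body) {
    this.body = body;
  }

  // Mirrors "use getCharacterEncoding(), falling back to UTF-8 if null".
  public BufferedReader getReader(String characterEncoding) {
    Charset charset = characterEncoding == null
        ? StandardCharsets.UTF_8
        : Charset.forName(characterEncoding);
    return new BufferedReader(
        new InputStreamReader(new ByteArrayInputStream(body), charset));
  }

  public static void main(String[] args) throws IOException {
    CachedBodySketch cached =
        new CachedBodySketch("{\"id\":1}".getBytes(StandardCharsets.UTF_8));
    // Two independent reads both see the full body.
    System.out.println(cached.getReader(null).readLine());
    System.out.println(cached.getReader(null).readLine());
  }
}
```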
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Line 89: The request body is being capped with
parameter.getJsonRpcMaxResponseSize() in JsonRpcServlet.readBody call which
wrongly couples request-size to response-size; introduce or use a distinct
request-size limit (e.g., parameter.getJsonRpcMaxRequestSize() or a new config
field), change the readBody invocation in JsonRpcServlet to use that request
limit instead of getJsonRpcMaxResponseSize(), leave response-size uses
(getJsonRpcMaxResponseSize()) unchanged, and update configuration defaults/docs
and any parameter parsing to include the new request-size setting (with a
sensible fallback to avoid breaking deployments).
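The bounded `readBody` this comment calls for can be sketched without the servlet API. This is an assumed shape, not the PR's code: the limit parameter and the error path (here a plain `IOException`; the servlet would map it to a -32005 JSON-RPC error) are illustrative.

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;

// Sketch of a size-capped readBody: stop as soon as the running total
// exceeds maxBytes instead of buffering an unbounded request body.
public class BoundedReadSketch {

  static byte[] readBody(InputStream in, int maxBytes) throws IOException {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    byte[] tmp = new byte[4096];
    int n;
    int total = 0;
    while ((n = in.read(tmp)) != -1) {
      total += n;
      if (maxBytes > 0 && total > maxBytes) {
        // In the servlet this would surface as a JSON-RPC -32005 response.
        throw new IOException("Request body exceeds the limit of " + maxBytes);
      }
      buffer.write(tmp, 0, n);
    }
    return buffer.toByteArray();
  }

  public static void main(String[] args) throws IOException {
    byte[] small = readBody(new ByteArrayInputStream(new byte[100]), 1024);
    System.out.println(small.length); // 100: under the cap, fully buffered
    try {
      readBody(new ByteArrayInputStream(new byte[8192]), 1024);
    } catch (IOException e) {
      System.out.println("rejected: " + e.getMessage());
    }
  }
}
```

Note the cap is checked per chunk, so at most one extra 4 KiB read happens before rejection.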
- Around line 33-35: The pre-parse steps readBody(...) and MAPPER.readTree(...)
can throw servlet-level exceptions for oversized requests or malformed JSON;
update the JsonRpcError enum to add PARSE_ERROR(-32700) and wrap the calls to
readBody and MAPPER.readTree in a try-catch that converts size-limit violations
into JsonRpcError.EXCEED_LIMIT (-32005) responses and JSON parse failures into
JsonRpcError.PARSE_ERROR (-32700) responses (keeping existing
RESPONSETOO_LARGE/-32003 where appropriate), ensuring the servlet returns proper
JSON-RPC error objects instead of propagating exceptions.
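The enum extension described above is small enough to sketch in isolation. The codes are real (-32700 is the JSON-RPC 2.0 parse error; -32005 and -32003 are this PR's limit codes); the class wrapper here is just scaffolding for a standalone example.

```java
// Standalone sketch of the JsonRpcError enum the review asks to extend with
// PARSE_ERROR(-32700). In the PR, the enum lives inside JsonRpcServlet.
public class JsonRpcErrorSketch {
  enum JsonRpcError {
    PARSE_ERROR(-32700),        // malformed JSON (JSON-RPC 2.0 spec code)
    EXCEED_LIMIT(-32005),       // batch/request size limit breached
    RESPONSE_TOO_LARGE(-32003); // buffered response overflowed the cap

    private final int code;

    JsonRpcError(int code) {
      this.code = code;
    }

    int code() {
      return code;
    }
  }

  public static void main(String[] args) {
    for (JsonRpcError e : JsonRpcError.values()) {
      System.out.println(e + " = " + e.code());
    }
  }
}
```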
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 920b17a2-c138-473b-b836-b7d711012802
📒 Files selected for processing (9)
- common/src/main/java/org/tron/common/parameter/CommonParameter.java
- common/src/main/java/org/tron/core/exception/jsonrpc/JsonRpcResponseTooLargeException.java
- framework/src/main/java/org/tron/core/config/args/Args.java
- framework/src/main/java/org/tron/core/config/args/ConfigKey.java
- framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
- framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
- framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
- framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogFilter.java
- framework/src/main/resources/config.conf
```java
rpcServer.handle(req, resp);
CommonParameter parameter = CommonParameter.getInstance();

byte[] body = readBody(req.getInputStream(), parameter.getJsonRpcMaxResponseSize());
```
Don’t reuse jsonRpcMaxResponseSize as the request body cap.
Line 89 couples accepted request size to the response limit used on Lines 100-101. If operators lower response size to protect egress, valid requests with small responses can start failing during input buffering. Use a separate request-body limit, or make the shared limit explicit in config/docs.
🛠️ Proposed direction

```diff
- byte[] body = readBody(req.getInputStream(), parameter.getJsonRpcMaxResponseSize());
+ byte[] body = readBody(req.getInputStream(), parameter.getJsonRpcMaxRequestSize());
```

Also applies to: 100-101
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
at line 89, The request body is being capped with
parameter.getJsonRpcMaxResponseSize() in JsonRpcServlet.readBody call which
wrongly couples request-size to response-size; introduce or use a distinct
request-size limit (e.g., parameter.getJsonRpcMaxRequestSize() or a new config
field), change the readBody invocation in JsonRpcServlet to use that request
limit instead of getJsonRpcMaxResponseSize(), leave response-size uses
(getJsonRpcMaxResponseSize()) unchanged, and update configuration defaults/docs
and any parameter parsing to include the new request-size setting (with a
sensible fallback to avoid breaking deployments).
1 issue found across 6 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java">
<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:104">
P1: Request timeout enforcement was removed; JSON-RPC calls can now run unbounded and tie up servlet threads.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
♻️ Duplicate comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (1)
33-35: ⚠️ Potential issue | 🟠 Major

Restore bounded, JSON-RPC-mapped request parsing.

`readBody` still buffers the entire request with no cap before batch validation, so a large body can exhaust heap. Also, `MAPPER.readTree` runs outside the `try`, so malformed JSON still escapes as a servlet error instead of a JSON-RPC parse error. Use a dedicated request-body limit and map parse failures to -32700. This is the same risk previously flagged around pre-parse handling and request-size limiting.

Suggested direction

```diff
+import com.fasterxml.jackson.core.JsonProcessingException;
 ...
 enum JsonRpcError {
+  PARSE_ERROR(-32700),
   EXCEED_LIMIT(-32005),
   RESPONSE_TOO_LARGE(-32003);
```

```diff
-    byte[] body = readBody(req.getInputStream());
-
-    JsonNode rootNode = MAPPER.readTree(body);
+    byte[] body;
+    JsonNode rootNode;
+    try {
+      body = readBody(req.getInputStream(), parameter.getJsonRpcMaxRequestSize());
+      rootNode = MAPPER.readTree(body);
+    } catch (RequestBodyTooLargeException e) {
+      writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT, e.getMessage(), null);
+      return;
+    } catch (JsonProcessingException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
+      return;
+    }
```

```diff
-  private byte[] readBody(InputStream in) throws IOException {
+  private byte[] readBody(InputStream in, int maxBytes) throws IOException {
     ByteArrayOutputStream buffer = new ByteArrayOutputStream();
     byte[] tmp = new byte[4096];
     int n;
+    int total = 0;
     while ((n = in.read(tmp)) != -1) {
+      total += n;
+      if (maxBytes > 0 && total > maxBytes) {
+        throw new RequestBodyTooLargeException(
+            "Request body exceeds the limit of " + maxBytes);
+      }
       buffer.write(tmp, 0, n);
     }
     return buffer.toByteArray();
   }
```

Please verify that a separate `jsonRpcMaxRequestSize` / `getJsonRpcMaxRequestSize()` is added to config and parameter loading rather than reusing `jsonRpcMaxResponseSize`.

```bash
#!/bin/bash
# Verify JSON-RPC request-size config exists and readBody is bounded.
rg -n -C3 'jsonRpcMax(Request|Response)Size|getJsonRpcMax(Request|Response)Size|readBody\s*\('
```

Also applies to: 89-91, 116-124
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java` around lines 33 - 35, Add a dedicated request-size cap and map JSON parse failures to the JSON-RPC parse error (-32700): introduce a config getter getJsonRpcMaxRequestSize()/jsonRpcMaxRequestSize and use it in readBody to bound reading the request (prevent full buffering), throw or return Json-RPC error code -32700 on MAPPER.readTree failures by wrapping the parse call in the try/catch that yields the JSON-RPC error response, and ensure exceed-limit conditions use JsonRpcError.EXCEED_LIMIT/RESPONSE_TOO_LARGE as appropriate; apply the same changes to the other readBody occurrences noted (lines ~89-91, 116-124).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Duplicate comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 33-35: Add a dedicated request-size cap and map JSON parse
failures to the JSON-RPC parse error (-32700): introduce a config getter
getJsonRpcMaxRequestSize()/jsonRpcMaxRequestSize and use it in readBody to bound
reading the request (prevent full buffering), throw or return Json-RPC error
code -32700 on MAPPER.readTree failures by wrapping the parse call in the
try/catch that yields the JSON-RPC error response, and ensure exceed-limit
conditions use JsonRpcError.EXCEED_LIMIT/RESPONSE_TOO_LARGE as appropriate;
apply the same changes to the other readBody occurrences noted (lines ~89-91,
116-124).
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ee275e1b-1c36-4cb8-82d2-17c535ecf953
📒 Files selected for processing (1)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
1 issue found across 1 file (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java">
<violation number="1" location="framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java:94">
P2: Handle empty-body parsing explicitly: `readTree` may return null, which can trigger a NullPointerException on the next `rootNode` access.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```java
JsonNode rootNode;
try {
  body = readBody(req.getInputStream());
  rootNode = MAPPER.readTree(body);
```
P2: Handle empty-body parsing explicitly: readTree may return null, which can trigger a NullPointerException on the next rootNode access.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java, line 94:
<comment>Handle empty-body parsing explicitly: `readTree` may return null, which can trigger a NullPointerException on the next `rootNode` access.</comment>
<file context>
@@ -86,9 +87,15 @@ public Integer getJsonRpcCode(int httpStatusCode) {
+ JsonNode rootNode;
+ try {
+ body = readBody(req.getInputStream());
+ rootNode = MAPPER.readTree(body);
+ } catch (IOException e) {
+ writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
</file context>
Suggested change:

```java
rootNode = MAPPER.readTree(body);
if (rootNode == null) {
  writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
  return;
}
```
Actionable comments posted: 3
🧹 Nitpick comments (1)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (1)
110-121: The buffer wrapping design is sound; no collision risk exists — but add a comment explaining the batch trade-off.

The `BufferedResponseWrapper` contract is correctly implemented: it strictly buffers the response body (to a `ByteArrayOutputStream`) and only stores header mutations (`setStatus`, `setContentType`) locally until `commitToResponse()`. When `JsonRpcResponseTooLargeException` is thrown, `writeJsonRpcError(resp, ...)` writes directly to the unwrapped `resp` and returns immediately — `bufferedResp.commitToResponse()` is never called, so there is no collision. The underlying response is still clean (not committed) at that point, making the write safe.

However, for a batched request that overflows, the code intentionally returns a single top-level error with `id=null`, discarding any partial per-call results already produced. While this is JSON-RPC spec-legal, a comment acknowledging this trade-off (e.g., on lines 114-115) would help future maintainers understand the design choice.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java` around lines 110 - 121, Add a clarifying comment in JsonRpcServlet near the JsonRpcResponseTooLargeException catch (around where rpcServer.handle, bufferedResp.commitToResponse and writeJsonRpcError are used) that explains the buffering contract and the intentional trade-off for batched requests: BufferedResponseWrapper only buffers body and header mutations until commitToResponse(), so when JsonRpcResponseTooLargeException is thrown the handler writes a single top-level error (id=null) directly to the unwrapped resp and returns without calling bufferedResp.commitToResponse(), discarding any partial per-call results produced for the batch; this documents why a single error is returned instead of per-call partials.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 92-98: The current try/catch in JsonRpcServlet conflates transport
IO failures from readBody and JSON parsing failures from MAPPER.readTree into a
single JsonRpcError.PARSE_ERROR; change the code to call
readBody(req.getInputStream()) and MAPPER.readTree(body) in sequence and handle
failures separately: catch com.fasterxml.jackson.core.JsonProcessingException
(or JsonParseException) around MAPPER.readTree and call writeJsonRpcError(resp,
JsonRpcError.PARSE_ERROR, "Parse json error", null), but do not map plain
java.io.IOException from readBody to PARSE_ERROR—either let IOExceptions
propagate (so the servlet/container handles/logs them) or handle them as a
distinct transport/internal error and log appropriately; reference readBody,
MAPPER.readTree, writeJsonRpcError, and JsonRpcError.PARSE_ERROR when making the
change.
- Around line 87-131: The readBody(InputStream in) currently reads the entire
request into memory with no cap, enabling an OOM DoS; change it to enforce a
configured request-size cap (add or use
CommonParameter.getJsonRpcMaxRequestSize(), falling back to
getJsonRpcMaxResponseSize() if missing) inside readBody so it stops reading and
signals an error when the limit is exceeded; have doPost catch that condition
and call writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT, "Request body
exceeds limit", idNode) (distinct from PARSE_ERROR), and ensure the new logic
references readBody, doPost, CommonParameter, and JsonRpcError.EXCEED_LIMIT so
oversized requests are rejected before buffering the full body.
In `@framework/src/main/resources/config.conf`:
- Around line 380-383: The current behavior rejects batches and responses when
maxBatchSize/maxResponseSize are 0/negative; change checks to treat <= 0 as "no
limit": in JsonRpcServlet.doPost adjust the batch check to enforce only when
parameter.getJsonRpcMaxBatchSize() > 0 (e.g. if
(parameter.getJsonRpcMaxBatchSize() > 0 && rootNode.size() >
parameter.getJsonRpcMaxBatchSize()) ...), and update BufferedResponseWrapper (or
the code that passes maxResponseSize) to only cap/validate response bytes when
maxResponseSize > 0 so 0/negative means unlimited; ensure references to
maxBatchSize and maxResponseSize in parameter and BufferedResponseWrapper follow
this rule.
---
Nitpick comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 110-121: Add a clarifying comment in JsonRpcServlet near the
JsonRpcResponseTooLargeException catch (around where rpcServer.handle,
bufferedResp.commitToResponse and writeJsonRpcError are used) that explains the
buffering contract and the intentional trade-off for batched requests:
BufferedResponseWrapper only buffers body and header mutations until
commitToResponse(), so when JsonRpcResponseTooLargeException is thrown the
handler writes a single top-level error (id=null) directly to the unwrapped resp
and returns without calling bufferedResp.commitToResponse(), discarding any
partial per-call results produced for the batch; this documents why a single
error is returned instead of per-call partials.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: f1824724-0ab2-4c30-b121-d0ea656e8945
📒 Files selected for processing (5)
- common/src/main/java/org/tron/common/parameter/CommonParameter.java
- framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
- framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
- framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
- framework/src/main/resources/config.conf
🚧 Files skipped from review as they are similar to previous changes (2)
- framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
- framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
```java
try {
  body = readBody(req.getInputStream());
  rootNode = MAPPER.readTree(body);
} catch (IOException e) {
  writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse json error", null);
  return;
}
```
Collapsing IOException and JSON parse errors into a single PARSE_ERROR is misleading and hides real failures.
readBody can throw IOException for legitimate transport problems (client aborts, socket reset, read timeout) — none of those are "Parse json error", yet clients would see -32700. MAPPER.readTree throws JsonProcessingException (an IOException subclass), which is the only case that should map to -32700. Split the handling so each failure mode reports an accurate code/message, and consider letting true transport IOExceptions propagate (the container will log them appropriately) rather than masking them as a JSON-RPC parse error. The diff in the critical comment above already splits these; the key point is: don’t let a network blip look like malformed JSON.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 92 - 98, The current try/catch in JsonRpcServlet conflates
transport IO failures from readBody and JSON parsing failures from
MAPPER.readTree into a single JsonRpcError.PARSE_ERROR; change the code to call
readBody(req.getInputStream()) and MAPPER.readTree(body) in sequence and handle
failures separately: catch com.fasterxml.jackson.core.JsonProcessingException
(or JsonParseException) around MAPPER.readTree and call writeJsonRpcError(resp,
JsonRpcError.PARSE_ERROR, "Parse json error", null), but do not map plain
java.io.IOException from readBody to PARSE_ERROR—either let IOExceptions
propagate (so the servlet/container handles/logs them) or handle them as a
distinct transport/internal error and log appropriately; reference readBody,
MAPPER.readTree, writeJsonRpcError, and JsonRpcError.PARSE_ERROR when making the
change.
1 issue found across 2 files (changes from recent commits).
Prompt for AI agents (unresolved issues)
Check if these issues are valid — if so, understand the root cause of each and fix them. If appropriate, use sub-agents to investigate and fix each issue separately.
<file name="framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java">
<violation number="1" location="framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java:88">
P2: `getWriter()` uses JVM default charset instead of the response charset, which can corrupt JSON-RPC responses containing non-ASCII characters.</violation>
</file>
Reply with feedback, questions, or to request a fix. Tag @cubic-dev-ai to re-run a review.
```java
@Override
public PrintWriter getWriter() {
  return new PrintWriter(outputStream, true);
}
```
P2: getWriter() uses JVM default charset instead of the response charset, which can corrupt JSON-RPC responses containing non-ASCII characters.
Prompt for AI agents
Check if this issue is valid — if so, understand the root cause and fix it. At framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java, line 88:
<comment>`getWriter()` uses JVM default charset instead of the response charset, which can corrupt JSON-RPC responses containing non-ASCII characters.</comment>
<file context>
@@ -82,6 +83,11 @@ public ServletOutputStream getOutputStream() {
+ @Override
+ public PrintWriter getWriter() {
+ return new PrintWriter(outputStream, true);
+ }
+
</file context>
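The charset fix for this finding can be demonstrated in plain Java. This is a sketch under the assumption that the wrapper's buffered stream is a `ByteArrayOutputStream`: routing the writer through an `OutputStreamWriter` with an explicit charset (rather than `new PrintWriter(outputStream, true)`, which uses the JVM default) keeps non-ASCII JSON intact.

```java
import java.io.ByteArrayOutputStream;
import java.io.OutputStreamWriter;
import java.io.PrintWriter;
import java.nio.charset.StandardCharsets;

// Sketch of a charset-safe getWriter(): write through an OutputStreamWriter
// with an explicit charset so the buffered bytes decode back losslessly.
// The ByteArrayOutputStream stands in for the wrapper's size-checked stream.
public class CharsetWriterSketch {

  static byte[] writeWithCharset(String payload) {
    ByteArrayOutputStream buffer = new ByteArrayOutputStream();
    // Explicit UTF-8, mirroring "honor getCharacterEncoding() or fall back
    // to UTF-8"; autoflush keeps the buffer current after println calls.
    PrintWriter writer = new PrintWriter(
        new OutputStreamWriter(buffer, StandardCharsets.UTF_8), true);
    writer.print(payload);
    writer.flush();
    return buffer.toByteArray();
  }

  public static void main(String[] args) {
    String json = "{\"result\":\"héllo 你好\"}";
    // Decoding with the same charset round-trips the payload intact.
    System.out.println(new String(writeWithCharset(json), StandardCharsets.UTF_8));
  }
}
```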
```java
  writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
  return;
}
if (rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()) {
```
[MUST] maxBatchSize=0 does not honor the "0 means no limit" contract
The batch-size guard at line 99 reads rootNode.size() > parameter.getJsonRpcMaxBatchSize() without a > 0 precondition. The PR description and the sibling keys maxResponseSize (BufferedResponseWrapper.checkLimit) and maxAddressSize (LogFilter) all state that 0 disables the limit; this branch does the opposite — configuring maxBatchSize = 0 rejects every non-empty batch with -32005. That is a silent operator trap.
Suggestion: add a max > 0 && guard so the three limits follow the same semantics.
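The guard the comment proposes is a one-liner; a minimal sketch of the shared semantics (a limit applies only when strictly positive, so 0 disables it):

```java
// Sketch of the "0 means no limit" contract all three limits should share
// (maxBatchSize, maxResponseSize, maxAddressSize).
public class LimitGuardSketch {

  static boolean exceedsLimit(int actual, int max) {
    return max > 0 && actual > max;
  }

  public static void main(String[] args) {
    System.out.println(exceedsLimit(150, 100)); // true: over a positive limit
    System.out.println(exceedsLimit(50, 100));  // false: within the limit
    System.out.println(exceedsLimit(150, 0));   // false: 0 disables the check
  }
}
```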
```java
  writeJsonRpcError(resp, JsonRpcError.RESPONSE_TOO_LARGE, e.getMessage(), idNode);
  return;
} catch (Exception e) {
  throw new IOException("RPC execution failed", e);
```
[SHOULD] Wrapping every handler exception as IOException escapes as HTTP 500
catch (Exception e) throw new IOException("RPC execution failed", e) rethrows all non-JsonRpcResponseTooLargeException runtime errors. The parent RateLimiterServlet.service only catches ServletException | IOException and re-throws, so the servlet container will emit an HTTP 500 with no JSON-RPC body. jsonrpc4j's ErrorResolver would normally have produced a structured error response on HTTP 200 — that contract is now lost, and web3.js/ethers-style clients will interpret 500 as a transport failure and retry.
Suggestion: either drop this catch (let the original IOException path from rpcServer.handle propagate) or log the root cause and emit a -32603 Internal error through writeJsonRpcError so the HTTP 200 + JSON-RPC error contract is preserved.
```java
JsonNode rootNode;
try {
  body = readBody(req.getInputStream());
  rootNode = MAPPER.readTree(body);
```
[SHOULD] Empty or whitespace-only body makes readTree return null and NPE on the next line
MAPPER.readTree(body) returns null (not a MissingNode) for a zero-length input, so the subsequent rootNode.isArray() at line 99 throws NullPointerException. The NPE is not caught by the IOException clause at line 95; it bubbles out as an unchecked exception, gets wrapped by the catch at line 116, and the client sees HTTP 500 instead of the structured -32700 Parse error that the parse path was meant to return.
Suggestion: treat rootNode == null || rootNode.isMissingNode() as a parse error and return -32700 via writeJsonRpcError.
```java
if (rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()) {
  writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
      "Batch size " + rootNode.size() + " exceeds the limit of "
          + parameter.getJsonRpcMaxBatchSize(), null);
```
[SHOULD] Batch-size rejection returns a single error object, not a batch array
A JSON-RPC 2.0 batch request must be answered with a JSON array of responses. When rootNode.isArray() is true and the size exceeds maxBatchSize, the current code returns a single-object error response, which breaks the batch contract — standard clients (web3j, web3.js, ethers) will fail to parse the response as a batch and either error out or retry the whole batch.
Suggestion: when rejecting an over-size batch, wrap the -32005 error object in a JSON array of length 1 (id=null), or document this deviation prominently in config.conf and the PR description.
If a batch request exceeds maxBatchSize, it should return a single-object error response. Suppose the batch size is over 100000: should I answer with a 100000-entry array?
```java
  }
}

@Override
```
[SHOULD] getStatus() and setHeader("Content-Length", ...) are not intercepted
Header capture only covers setStatus, setContentType, setContentLength(int|long). Two gaps:
- `getStatus()` is not overridden, so callers (logging filter, metrics) read the underlying response status, not the `this.status` field — they will see `SC_OK` until `commitToResponse()` is called.
- `setHeader("Content-Length", ...)` / `addHeader(...)` pass straight through to the underlying response. jsonrpc4j currently calls `setContentLength`, but any downstream filter or library upgrade using `setHeader` would write a Content-Length to the real response before the size check runs.
Suggestion: override getStatus() to return this.status, and intercept setHeader / addHeader for the Content-Length name (case-insensitive).
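A plain-Java sketch of that suggestion (servlet types replaced by a `Map` stand-in; all names here are illustrative, not the PR's code): status and Content-Length are held locally, with a case-insensitive match on the header name.

```java
import java.util.LinkedHashMap;
import java.util.Map;

// Sketch of header capture for a buffering response wrapper: getStatus()
// returns the locally stored value, and Content-Length (case-insensitive)
// is held back instead of being forwarded to the real response.
public class HeaderCaptureSketch {
  int status = 200;                          // local copy, default SC_OK
  Integer contentLength;                     // captured until commit
  final Map<String, String> passthrough = new LinkedHashMap<>();

  void setStatus(int status) {
    this.status = status;
  }

  int getStatus() {
    return status;                           // buffered value, not the real response's
  }

  void setHeader(String name, String value) {
    if ("Content-Length".equalsIgnoreCase(name)) {
      contentLength = Integer.valueOf(value); // intercepted
    } else {
      passthrough.put(name, value);           // everything else forwarded
    }
  }

  public static void main(String[] args) {
    HeaderCaptureSketch resp = new HeaderCaptureSketch();
    resp.setStatus(500);
    resp.setHeader("content-length", "42");  // lowercase still captured
    resp.setHeader("Content-Type", "application/json");
    System.out.println(resp.getStatus());    // 500
    System.out.println(resp.contentLength);  // 42
    System.out.println(resp.passthrough);    // {Content-Type=application/json}
  }
}
```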
```diff
@@ -0,0 +1,17 @@
+package org.tron.core.exception.jsonrpc;
+
+public class JsonRpcResponseTooLargeException extends RuntimeException {
```
[SHOULD] New exception breaks the project's JsonRpcException family
Every other class in common/src/main/java/org/tron/core/exception/jsonrpc/ extends JsonRpcException (which in turn extends TronException and reports itself). This new class extends RuntimeException instead, so it cannot be wired into @JsonRpcError annotations or the shared JsonRpcErrorResolver if future methods need to surface response-too-large through the normal error path.
Suggestion: extend JsonRpcException, or document in a Javadoc why an unchecked non-TronException base was chosen (e.g. to avoid forcing the wrapper signature to throws).
```java
@Override
public ServletInputStream getInputStream() {
```
[SHOULD] getInputStream and getReader are not mutually exclusive
Servlet spec requires that once one of getInputStream() / getReader() has been used on a request, the other must throw IllegalStateException. This wrapper returns a fresh stream/reader from the cached byte array for both methods, allowing arbitrary interleaving. jsonrpc4j only calls one today, so the bug is latent — but any future filter that reads the body through the other accessor would silently double-read.
Suggestion: track which accessor was used first and throw IllegalStateException on the second, as the spec requires.
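The mutual-exclusion rule can be illustrated without the servlet API; this hypothetical plain-Java class tracks which accessor was used first (a real wrapper would return `ServletInputStream`/`BufferedReader` from `HttpServletRequestWrapper`):

```java
import java.io.BufferedReader;
import java.io.ByteArrayInputStream;
import java.io.InputStream;
import java.io.InputStreamReader;
import java.nio.charset.StandardCharsets;

public class CachedBodySketch {
  private final byte[] body;
  private boolean streamUsed;
  private boolean readerUsed;

  public CachedBodySketch(byte[] body) {
    this.body = body;
  }

  // Per the servlet spec: once getReader() has been used,
  // getInputStream() must throw IllegalStateException, and vice versa.
  public InputStream getInputStream() {
    if (readerUsed) {
      throw new IllegalStateException("getReader() has already been called");
    }
    streamUsed = true;
    return new ByteArrayInputStream(body);
  }

  public BufferedReader getReader() {
    if (streamUsed) {
      throw new IllegalStateException("getInputStream() has already been called");
    }
    readerUsed = true;
    return new BufferedReader(new InputStreamReader(
        new ByteArrayInputStream(body), StandardCharsets.UTF_8));
  }

  public static void main(String[] args) throws Exception {
    CachedBodySketch req =
        new CachedBodySketch("{\"id\":1}".getBytes(StandardCharsets.UTF_8));
    System.out.println(req.getReader().readLine());
  }
}
```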
```java
public int jsonRpcMaxBatchSize = 100;
@Getter
@Setter
public int jsonRpcMaxResponseSize = 25 * 1024 * 1024;
```
[SHOULD] Size-limit fields use int; no range validation on config load
jsonRpcMaxResponseSize and the peer maxBytes parameter in BufferedResponseWrapper are int, capped at 2 GiB. The current 25 MiB default is fine, but reading via config.getInt(...) with no range validation means a negative config value silently becomes a permanent "no limit" state (thanks to the maxBytes > 0 guard) and a value near Integer.MAX_VALUE will put ByteArrayOutputStream near its addressable limit.
Suggestion: either switch the field (and wrapper param) to long plus validate >= 0 during Args.applyConfigParams, or reject out-of-range values with a clear error at startup.
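A startup-time range check along the suggested lines might look like this (the helper is hypothetical; the real validation would sit in `Args.applyConfigParams`):

```java
public class SizeLimitValidation {

  // Hypothetical check: fail fast on out-of-range limits instead of
  // letting a negative config value silently disable the cap.
  static long validatedLimit(String name, long value) {
    if (value < 0 || value > Integer.MAX_VALUE) {
      throw new IllegalArgumentException(
          name + " must be in [0, " + Integer.MAX_VALUE + "], got " + value);
    }
    return value; // 0 keeps the documented "no limit" meaning
  }

  public static void main(String[] args) {
    System.out.println(validatedLimit("node.jsonrpc.maxResponseSize", 25L * 1024 * 1024));
  }
}
```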
```java
 * only forwarded to the real response via {@link #commitToResponse()}, preventing a timed-out
 * handler thread from racing with the timeout error writer.
 */
public class BufferedResponseWrapper extends HttpServletResponseWrapper {
```
[DISCUSS] Consider Jetty SizeLimitHandler for the response-size cap
Scoped to the response-size concern only — the batch-size and address-size checks must stay at the JSON-RPC layer and are out of scope for this discussion.
The response-size cap this class implements can also be enforced one layer lower. Jetty 9.4 (project is on 9.4.57 per framework/build.gradle:45) ships org.eclipse.jetty.server.handler.SizeLimitHandler, which wraps the output stream, counts bytes on the fly, and aborts the response when the configured cap is reached. Wrapping the ServletContextHandler in FullNodeJsonRpcHttpService.addServlet (line 30) with it would:
- Drop the full in-memory buffer. Peak memory per connection falls from ~`maxResponseSize` (the `ByteArrayOutputStream` in `checkLimit`) to the Jetty output buffer (~8 KiB), i.e. true streaming instead of buffer-then-commit.
- Remove this wrapper and its `JsonRpcResponseTooLargeException` plumbing; the same attacker model (oversized response → bounded memory) is still covered.
Tradeoff: SizeLimitHandler aborts with HttpException.RuntimeException → HTTP 413 plain-text, not a JSON-RPC -32003 structured error. Clients like web3j / ethers would see a transport-level failure rather than a parseable JSON-RPC error object. If that structured-error UX is a product requirement (for example so clients can auto-fallback to a smaller range), keep the current wrapper. If the cap is intended as a safety net where 413 is acceptable, SizeLimitHandler is simpler and more memory-efficient.
No change requested — just worth deciding explicitly which tradeoff this PR is making.
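The count-on-the-fly alternative can be illustrated without the servlet API; this hypothetical stream tracks the byte total and flags overflow instead of buffering the whole response (a real version would wrap `ServletOutputStream`):

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

public class BoundedCountingStream extends OutputStream {
  private final OutputStream target;
  private final long maxBytes; // 0 means no limit
  private long written;
  private boolean overflow;

  public BoundedCountingStream(OutputStream target, long maxBytes) {
    this.target = target;
    this.maxBytes = maxBytes;
  }

  @Override
  public void write(int b) throws IOException {
    if (overflow) {
      return; // once over the cap, silently drop further bytes
    }
    if (maxBytes > 0 && written + 1 > maxBytes) {
      overflow = true; // caller checks this instead of catching an exception
      return;
    }
    written++;
    target.write(b);
  }

  public boolean isOverflow() {
    return overflow;
  }

  public long getWritten() {
    return written;
  }

  public static void main(String[] args) throws IOException {
    BoundedCountingStream s = new BoundedCountingStream(new ByteArrayOutputStream(), 5);
    for (int i = 0; i < 10; i++) {
      s.write('a');
    }
    System.out.println(s.getWritten() + " bytes kept, overflow=" + s.isOverflow());
  }
}
```

Peak memory here is the downstream buffer rather than the full payload, which is the tradeoff the discussion weighs against a structured `-32003` error.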
Actionable comments posted: 1
♻️ Duplicate comments (3)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (3)
**109-123: ⚠️ Potential issue | 🟠 Major**

Wrapping every handler exception as `IOException` and ignoring overflow for batch requests breaks the JSON-RPC contract. Two issues in the dispatch/overflow path:

- `catch (Exception e) throw new IOException("RPC execution failed", e);` (Lines 111-113) converts every non-`IOException` runtime failure inside `rpcServer.handle(...)` into a servlet `IOException`. The container will surface that as HTTP 500 with no JSON-RPC body, so web3.js / ethers / web3j clients see a transport failure instead of jsonrpc4j's structured error response (which would otherwise be HTTP 200 + a JSON-RPC error object via `JsonRpcErrorResolver`). Either drop this catch and let `rpcServer.handle` propagate, or log and emit a `-32603 Internal error` through `writeJsonRpcError` to preserve the HTTP-200 + JSON-RPC-error contract.
- On overflow for a batch request, Line 116 sets `idNode = null` and `writeJsonRpcError` writes a single error object. JSON-RPC 2.0 mandates that the response to a non-empty batch be a JSON array of responses; a bare object will fail to parse on standard batch-aware clients. The same shape problem exists for the batch-rejection path on Lines 99-101. At minimum, when the request was a batch, wrap the error object in a one-element array, or document this deviation explicitly in `config.conf`.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java` around lines 109 - 123, The catch in JsonRpcServlet around rpcServer.handle(...) currently converts all exceptions to IOException which yields HTTP 500 and no JSON-RPC body; remove that catch or replace it with logic that logs the exception and calls writeJsonRpcError(..., JsonRpcError.INTERNAL_ERROR, ..., idNode) so the servlet returns HTTP 200 with a JSON-RPC -32603 error (use the same idNode determination as below). Also fix the overflow handling: when bufferedResp.isOverflow() and rootNode.isArray() is true, wrap the single error object emitted by writeJsonRpcError(...) in a one-element JSON array (or use a new helper that emits an array response for batch requests) so batch requests receive an array response; apply the same array-wrapping change to the earlier batch-rejection path that currently returns a bare object.
**89-97: ⚠️ Potential issue | 🔴 Critical**

Request body is read with no size cap: a DoS vector and a direct contradiction of the PR goal.

`readBody(req.getInputStream())` at Line 92 calls into the helper at Lines 125-133, which copies the entire request stream into a `ByteArrayOutputStream` with no limit. A client sending a large `Content-Length` (or chunked body) will force the servlet to allocate the full payload in heap before any of the new validations (batch size, address size, response size) get a chance to run, so the very thing this PR is meant to mitigate is bypassed at the entry point.

Additionally, the catch on Line 94 maps every `IOException` to `PARSE_ERROR (-32700)`. Transport-level failures (client abort, socket reset, read timeout) are not parse errors and shouldn't be reported as such; and once a request-size cap is added, oversized requests should map to `EXCEED_LIMIT (-32005)`, not `-32700`.

🔒️ Suggested direction

```diff
-    byte[] body;
-    JsonNode rootNode;
-    try {
-      body = readBody(req.getInputStream());
-      rootNode = MAPPER.readTree(body);
-    } catch (IOException e) {
-      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
-      return;
-    }
+    byte[] body;
+    try {
+      body = readBody(req.getInputStream(), parameter.getJsonRpcMaxRequestSize());
+    } catch (RequestTooLargeException e) {
+      writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
+          "Request body exceeds the limit of " + parameter.getJsonRpcMaxRequestSize(), null);
+      return;
+    }
+    JsonNode rootNode;
+    try {
+      rootNode = MAPPER.readTree(body);
+    } catch (JsonProcessingException e) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
+      return;
+    }
 @@
-  private byte[] readBody(InputStream in) throws IOException {
+  private byte[] readBody(InputStream in, int maxSize) throws IOException {
     ByteArrayOutputStream buffer = new ByteArrayOutputStream();
     byte[] tmp = new byte[4096];
-    int n;
-    while ((n = in.read(tmp)) != -1) {
+    int n;
+    int total = 0;
+    while ((n = in.read(tmp)) != -1) {
+      total += n;
+      if (maxSize > 0 && total > maxSize) {
+        throw new RequestTooLargeException("Request body exceeds " + maxSize + " bytes");
+      }
       buffer.write(tmp, 0, n);
     }
     return buffer.toByteArray();
   }
```

Also applies to: 125-133
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java` around lines 89 - 97, The request body is currently read without any size cap via readBody(InputStream) in JsonRpcServlet, enabling a DoS; modify the readBody helper used by JsonRpcServlet to accept a maximum allowed bytes parameter and enforce it while streaming (stop reading and throw a specific exception when the limit is exceeded), then update the JsonRpcServlet request handling to catch that size-exceeded exception and call writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT, "Request body exceeds allowed size", null); also distinguish transport IO failures from parse errors by returning a different error (e.g., map plain IOExceptions to a non-parse error or rethrow to be handled as a transport error) so only actual JSON parse failures map to JsonRpcError.PARSE_ERROR in the try/catch around MAPPER.readTree.
**89-103: ⚠️ Potential issue | 🟠 Major**

Empty body NPE and `maxBatchSize=0` operator trap.

Two related correctness gaps in the parse/validate block:

- `MAPPER.readTree(body)` returns `null` for an empty/whitespace-only body, so Line 98's `rootNode.isArray()` will throw `NullPointerException`. The NPE escapes the `IOException` catch on Line 94, gets wrapped by the `catch (Exception e)` on Line 111, and the client receives an HTTP 500 instead of the structured `-32700 Parse error` this code path is meant to produce. Treat `rootNode == null || rootNode.isMissingNode()` the same as a parse failure.
- The batch guard on Line 98 is `rootNode.size() > parameter.getJsonRpcMaxBatchSize()` with no `> 0` precondition. The PR description and the sibling limits (`maxResponseSize` in `BufferedResponseWrapper.checkLimit`, `maxAddressSize` in `LogFilter`) all document `0` as "no limit", but here `maxBatchSize=0` rejects every non-empty batch with `-32005`. Add `getJsonRpcMaxBatchSize() > 0 &&` so the three limits share the same semantics.

🛠️ Proposed fix

```diff
-    if (rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()) {
+    if (rootNode == null || rootNode.isMissingNode()) {
+      writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null);
+      return;
+    }
+    int maxBatch = parameter.getJsonRpcMaxBatchSize();
+    if (rootNode.isArray() && maxBatch > 0 && rootNode.size() > maxBatch) {
       writeJsonRpcError(resp, JsonRpcError.EXCEED_LIMIT,
-          "Batch size " + rootNode.size() + " exceeds the limit of "
-              + parameter.getJsonRpcMaxBatchSize(), null);
+          "Batch size " + rootNode.size() + " exceeds the limit of " + maxBatch, null);
       return;
     }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java` around lines 89 - 103, The parse/validate block in JsonRpcServlet currently assumes MAPPER.readTree(body) returns a non-null node and treats getJsonRpcMaxBatchSize()==0 as a strict limit; modify the logic in the try/catch after calling readBody/readTree so that if rootNode is null or rootNode.isMissingNode() you call writeJsonRpcError(resp, JsonRpcError.PARSE_ERROR, "Parse error", null) and return (same behavior as on IOException), and change the batch-size guard to only apply when parameter.getJsonRpcMaxBatchSize() > 0 (i.e. use getJsonRpcMaxBatchSize() > 0 && rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()), keeping the existing writeJsonRpcError(JsonRpcError.EXCEED_LIMIT, ...) and return behavior for that case.
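The shared "0 disables the limit" semantics reduce to one guard; a tiny sketch (class and method names hypothetical):

```java
public class BatchGuard {

  // Hypothetical guard: the limit applies only when positive, so
  // maxBatchSize = 0 means "no limit", matching the sibling limits.
  static boolean exceedsBatchLimit(int batchSize, int maxBatchSize) {
    return maxBatchSize > 0 && batchSize > maxBatchSize;
  }

  public static void main(String[] args) {
    System.out.println(exceedsBatchLimit(101, 100));
  }
}
```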
🧹 Nitpick comments (1)
framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java (1)
**14-165:** Add coverage for `getWriter()`.

The wrapper's `getWriter()` path (added in this PR) is not exercised by any test. At minimum, please add cases that verify (a) writes via `getWriter()` are still subject to the same `maxBytes` overflow detection, (b) bytes written via the writer are flushed and committed by `commitToResponse()`, and (c) non-ASCII characters round-trip correctly under the response's character encoding. Without these, the encoding/caching concerns called out on the main class will silently regress.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java` around lines 14 - 165, Add JUnit tests that exercise BufferedResponseWrapper.getWriter(): create wrappers with small maxBytes and verify writes via getWriter() trigger isOverflow() the same way as getOutputStream(); verify that text written through getWriter() is flushed/committed when commitToResponse() is called by asserting MockHttpServletResponse content and content length after commitToResponse(); and add a test that sets a UTF-8 charset on the wrapper/response (e.g., via setContentType or setCharacterEncoding), writes non-ASCII text via getWriter(), calls commitToResponse(), and asserts the read-back bytes/string matches the original to ensure correct encoding round-trip. Ensure tests reference BufferedResponseWrapper.getWriter(), BufferedResponseWrapper.commitToResponse(), BufferedResponseWrapper.isOverflow(), and MockHttpServletResponse content accessors.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`:
- Around line 117-120: getWriter() currently creates a new PrintWriter using the
JVM default charset and doesn't respect the response encoding or the servlet
contract; change getWriter() in BufferedResponseWrapper to return a single
cached PrintWriter instance (create it lazily), construct it with an
OutputStreamWriter that uses the servlet response's character encoding (e.g.,
getCharacterEncoding() or default to UTF-8), ensure getWriter() throws
IllegalStateException if getOutputStream() was already called, and likewise make
getOutputStream() throw if the writer was already created; also ensure the
writer is flushed/closed properly (or its encoder flushed) when the wrapper is
committed so no buffered characters are dropped.
---
Duplicate comments:
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 109-123: The catch in JsonRpcServlet around rpcServer.handle(...)
currently converts all exceptions to IOException which yields HTTP 500 and no
JSON-RPC body; remove that catch or replace it with logic that logs the
exception and calls writeJsonRpcError(..., JsonRpcError.INTERNAL_ERROR, ...,
idNode) so the servlet returns HTTP 200 with a JSON-RPC -32603 error (use the
same idNode determination as below). Also fix the overflow handling: when
bufferedResp.isOverflow() and rootNode.isArray() is true, wrap the single error
object emitted by writeJsonRpcError(...) in a one-element JSON array (or use a
new helper that emits an array response for batch requests) so batch requests
receive an array response; apply the same array-wrapping change to the earlier
batch-rejection path that currently returns a bare object.
- Around line 89-97: The request body is currently read without any size cap via
readBody(InputStream) in JsonRpcServlet, enabling a DoS; modify the readBody
helper used by JsonRpcServlet to accept a maximum allowed bytes parameter and
enforce it while streaming (stop reading and throw a specific exception when the
limit is exceeded), then update the JsonRpcServlet request handling to catch
that size-exceeded exception and call writeJsonRpcError(resp,
JsonRpcError.EXCEED_LIMIT, "Request body exceeds allowed size", null); also
distinguish transport IO failures from parse errors by returning a different
error (e.g., map plain IOExceptions to a non-parse error or rethrow to be
handled as a transport error) so only actual JSON parse failures map to
JsonRpcError.PARSE_ERROR in the try/catch around MAPPER.readTree.
- Around line 89-103: The parse/validate block in JsonRpcServlet currently
assumes MAPPER.readTree(body) returns a non-null node and treats
getJsonRpcMaxBatchSize()==0 as a strict limit; modify the logic in the try/catch
after calling readBody/readTree so that if rootNode is null or
rootNode.isMissingNode() you call writeJsonRpcError(resp,
JsonRpcError.PARSE_ERROR, "Parse error", null) and return (same behavior as on
IOException), and change the batch-size guard to only apply when
parameter.getJsonRpcMaxBatchSize() > 0 (i.e. use getJsonRpcMaxBatchSize() > 0 &&
rootNode.isArray() && rootNode.size() > parameter.getJsonRpcMaxBatchSize()),
keeping the existing writeJsonRpcError(JsonRpcError.EXCEED_LIMIT, ...) and
return behavior for that case.
---
Nitpick comments:
In
`@framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java`:
- Around line 14-165: Add JUnit tests that exercise
BufferedResponseWrapper.getWriter(): create wrappers with small maxBytes and
verify writes via getWriter() trigger isOverflow() the same way as
getOutputStream(); verify that text written through getWriter() is
flushed/committed when commitToResponse() is called by asserting
MockHttpServletResponse content and content length after commitToResponse(); and
add a test that sets a UTF-8 charset on the wrapper/response (e.g., via
setContentType or setCharacterEncoding), writes non-ASCII text via getWriter(),
calls commitToResponse(), and asserts the read-back bytes/string matches the
original to ensure correct encoding round-trip. Ensure tests reference
BufferedResponseWrapper.getWriter(), BufferedResponseWrapper.commitToResponse(),
BufferedResponseWrapper.isOverflow(), and MockHttpServletResponse content
accessors.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 18ece4a9-30cb-4831-82a4-0314be2d3298
📒 Files selected for processing (3)
framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.javaframework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.javaframework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java
```java
@Override
public PrintWriter getWriter() {
  return new PrintWriter(outputStream, true);
}
```
🧩 Analysis chain
🌐 Web query:
Java PrintWriter(OutputStream, boolean autoFlush) constructor default charset
💡 Result:
For Java’s PrintWriter(OutputStream out, boolean autoFlush) constructor, the characters are encoded using the JVM’s default character encoding (i.e., the default charset of that JVM).
Citations:
- 1: https://docs.oracle.com/javase/10/docs/api/java/io/PrintWriter.html
- 2: https://docs.oracle.com/javase/8/docs/api/java/io/PrintWriter.html
- 3: https://docs.oracle.com/en/java/javase/25/docs/api/java.base/java/io/PrintWriter.html
🏁 Script executed:
```shell
cat -n framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java | head -140
```

Repository: 317787106/java-tron
Length of output: 4481
getWriter() uses JVM default charset and creates a new PrintWriter on every call.
PrintWriter(OutputStream, boolean) constructs the underlying OutputStreamWriter with Charset.defaultCharset(), not the response's character encoding. On hosts where the JVM default isn't UTF-8 (e.g., Windows Cp1252, some containerized JVMs), any non-ASCII bytes produced by jsonrpc4j via getWriter() will be mis-encoded and the JSON body will be corrupted. The Servlet specification also requires repeated getWriter() calls to return the same instance and to throw IllegalStateException if getOutputStream() was already used; both invariants are currently violated. Discarding a PrintWriter without flushing can also drop characters still buffered inside its internal StreamEncoder.
🛡️ Proposed fix

```diff
+import java.io.OutputStreamWriter;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 @@
   private String contentType;
+  private PrintWriter writer;
+  private boolean outputStreamUsed;
   @Getter
   private boolean overflow = false;
 @@
   @Override
   public ServletOutputStream getOutputStream() {
+    if (writer != null) {
+      throw new IllegalStateException("getWriter() has already been called");
+    }
+    outputStreamUsed = true;
     return outputStream;
   }
   @Override
   public PrintWriter getWriter() {
-    return new PrintWriter(outputStream, true);
+    if (outputStreamUsed) {
+      throw new IllegalStateException("getOutputStream() has already been called");
+    }
+    if (writer == null) {
+      String enc = getCharacterEncoding();
+      Charset cs = (enc == null) ? StandardCharsets.UTF_8 : Charset.forName(enc);
+      writer = new PrintWriter(new OutputStreamWriter(outputStream, cs), true);
+    }
+    return writer;
   }
   public void commitToResponse() throws IOException {
+    if (writer != null) {
+      writer.flush();
+    }
     if (contentType != null) {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`
around lines 117 - 120, getWriter() currently creates a new PrintWriter using
the JVM default charset and doesn't respect the response encoding or the servlet
contract; change getWriter() in BufferedResponseWrapper to return a single
cached PrintWriter instance (create it lazily), construct it with an
OutputStreamWriter that uses the servlet response's character encoding (e.g.,
getCharacterEncoding() or default to UTF-8), ensure getWriter() throws
IllegalStateException if getOutputStream() was already called, and likewise make
getOutputStream() throw if the writer was already created; also ensure the
writer is flushed/closed properly (or its encoder flushed) when the wrapper is
committed so no buffered characters are dropped.
Force-pushed from b0c8e8a to e70ea6b (Compare)
Actionable comments posted: 2
♻️ Duplicate comments (5)
framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java (3)
**109-113: ⚠️ Potential issue | 🟠 Major**

Re-throwing as `IOException` can bypass JSON-RPC structured errors.

Lines 111-113 wrap handler exceptions into `IOException`, which can surface as a servlet/container HTTP 500 rather than a JSON-RPC error envelope.
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java` around lines 109 - 113, The current catch around rpcServer.handle(cachedReq, bufferedResp) rethrows all exceptions as IOException, which can turn handler errors into raw HTTP 500 responses instead of JSON-RPC error envelopes; change the handler so that exceptions are caught and translated into a proper JSON-RPC error written to bufferedResp (or let rpcServer produce the error envelope) rather than throwing IOException — i.e., in the catch block use rpcServer or a utility to build a JSON-RPC error object for cachedReq and write it to bufferedResp (and log the exception) so clients receive a valid JSON-RPC error response instead of an HTTP exception.
**89-97: ⚠️ Potential issue | 🔴 Critical**

Request body is still read unbounded into memory.

Line 92 + Lines 125-133 buffer the entire request with no cap. This keeps an OOM/heap-pressure DoS path open before validation. Also, Line 94 maps all `IOException` to parse error, which can misclassify transport failures.

Also applies to: 125-133
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java` around lines 89 - 97, The code currently reads the entire request into memory via readBody and then passes the full byte[] to MAPPER.readTree, which allows unbounded allocation and also maps all IOExceptions to JsonRpcError.PARSE_ERROR; update JsonRpcServlet to limit and validate request size by replacing the unbounded readBody usage with a capped read (or InputStream wrapper with a maxBytes threshold) or use Jackson's streaming parser directly on the request InputStream to enforce a maximum allowed body size, return a size-related error (e.g., WRITE a JsonRpcError indicating payload too large) when the cap is exceeded, and distinguish IO exceptions caused by transport failures from JSON parse errors by catching and mapping JsonProcessingException / Jackson parsing exceptions to JsonRpcError.PARSE_ERROR while mapping plain IOExceptions to an appropriate transport/IO error response via writeJsonRpcError (references: JsonRpcServlet, readBody, MAPPER.readTree, writeJsonRpcError, JsonRpcError.PARSE_ERROR).
**98-102: ⚠️ Potential issue | 🟠 Major**

Batch limit handling still violates two contracts.

Line 98 lacks a `maxBatchSize > 0` guard, so `maxBatchSize=0` rejects every non-empty batch. Also, for an oversized batch, Lines 99-101 return a single error object instead of a batch-array response shape.
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java` around lines 98 - 102, The batch-limit check should only apply when parameter.getJsonRpcMaxBatchSize() > 0 and oversized-batch responses must be a JSON-RPC batch (an array), not a single error object. Change the if condition that uses rootNode.isArray() to also require parameter.getJsonRpcMaxBatchSize() > 0, and when the batch is too large call a routine that writes a batch-shaped error (an array containing the JsonRpcError.EXCEED_LIMIT object) instead of writeJsonRpcError's single-object response; you can either add a new helper (e.g., writeJsonRpcErrorBatch) or extend writeJsonRpcError to accept a flag to wrap the error in an array — reference rootNode, parameter.getJsonRpcMaxBatchSize(), writeJsonRpcError, and JsonRpcError.EXCEED_LIMIT to locate the change.

framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java (2)
**69-71: ⚠️ Potential issue | 🟠 Major**

Writer/stream handling still breaks the servlet contract and can mis-encode output.

Line 69 uses the JVM-default charset (`PrintWriter(OutputStream, ...)`), Lines 115/120 allow both writer and stream paths, and Lines 124-131 don't flush the writer before commit. This can produce corrupted or truncated JSON.

Suggested fix
```diff
 import java.io.ByteArrayOutputStream;
 import java.io.IOException;
+import java.io.OutputStreamWriter;
 import java.io.PrintWriter;
+import java.nio.charset.Charset;
+import java.nio.charset.StandardCharsets;
 @@
-  private final PrintWriter writer = new PrintWriter(outputStream, true);
+  private PrintWriter writer;
+  private boolean outputStreamUsed;
 @@
   @Override
   public ServletOutputStream getOutputStream() {
+    if (writer != null) {
+      throw new IllegalStateException("getWriter() has already been called");
+    }
+    outputStreamUsed = true;
     return outputStream;
   }
 @@
   public PrintWriter getWriter() {
-    return writer;
+    if (outputStreamUsed) {
+      throw new IllegalStateException("getOutputStream() has already been called");
+    }
+    if (writer == null) {
+      String enc = getCharacterEncoding();
+      Charset cs = (enc == null) ? StandardCharsets.UTF_8 : Charset.forName(enc);
+      writer = new PrintWriter(new OutputStreamWriter(outputStream, cs), true);
+    }
+    return writer;
   }
 @@
   public void commitToResponse() throws IOException {
+    if (writer != null) {
+      writer.flush();
+    }
     if (contentType != null) {
       actual.setContentType(contentType);
     }
```

Also applies to: 114-122, 124-131
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java` around lines 69 - 71, BufferedResponseWrapper currently constructs a PrintWriter with the platform default charset and allows both writer and stream paths to be used and doesn't flush the writer before commit, causing mis-encoded or truncated output; fix by replacing the eager PrintWriter creation with a lazily-initialized Writer that wraps the existing outputStream using an explicit charset (e.g. UTF-8) via an OutputStreamWriter, enforce mutual exclusivity between getWriter() and getOutputStream() (throw IllegalStateException if the other has been obtained) in the methods that expose them, and ensure commit() (or the method that finalizes response) flushes and closes the writer if non-null before writing/committing the outputStream so all bytes are correctly encoded and flushed.
**29-31: ⚠️ Potential issue | 🟡 Minor**

Status/header buffering is incomplete.

Line 29 tracks status internally, but there is no `getStatus()` override. Also, `setHeader`/`addHeader("Content-Length", ...)` are not intercepted, so downstream code can leak `Content-Length` to the real response before wrapper commit.

Also applies to: 90-102, 124-131
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java` around lines 29 - 31, BufferedResponseWrapper currently tracks status in the private field status but does not override getStatus(), and it doesn't intercept header methods so downstream code can write Content-Length to the real response prematurely; update BufferedResponseWrapper to override getStatus() to return the internal status field and override all mutable status APIs (setStatus(int), setStatus(int,String), sendError(...)) to update that field; also override header APIs that can convey content length (setHeader(String,String), addHeader(String,String), setIntHeader(String,int), addIntHeader(String,int)) to detect "Content-Length" (case-insensitive), store/ignore it in the wrapper's buffered headers rather than delegating to the underlying response until commit, and ensure commit writes the buffered Content-Length appropriately.
🧹 Nitpick comments (1)
framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java (1)
**25-165:** Please add writer-path contract tests to prevent regressions.

Current tests only exercise `getOutputStream()`. Add cases for `getWriter()` encoding/flush behavior and `IllegalStateException` exclusivity between `getWriter()` and `getOutputStream()`.
Verify each finding against the current code and only fix it if needed. In `@framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java` around lines 25 - 165, Add tests exercising the writer path for BufferedResponseWrapper: create tests that call getWriter() to write UTF-8 text, flush and/or close the writer, then call commitToResponse() and assert the Mock response body bytes and content length match the written text and encoding; also add tests that calling getWriter() after using getOutputStream() (and vice versa) throws IllegalStateException to enforce exclusivity between getWriter() and getOutputStream(). Target the same test class and use the existing mockResp and methods like getWriter(), getOutputStream(), commitToResponse(), isOverflow(), and getContentAsByteArray() to validate behavior and prevent regressions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@common/src/main/resources/reference.conf`:
- Around line 403-411: Add the missing node.jsonrpc.maxRequestTimeout
configuration key alongside the other JSON-RPC guards (near maxBatchSize,
maxResponseSize, maxAddressSize); define a sensible default duration (for
example node.jsonrpc.maxRequestTimeout = 30s) so operators can override request
timeout behavior via config and the node.jsonrpc namespace is complete.
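A sketch of what the completed `node.jsonrpc` block could look like; key names follow the prompt above, and values other than the stated defaults are illustrative:

```hocon
node.jsonrpc = {
  maxBatchSize = 100          # 0 = no limit
  maxResponseSize = 26214400  # 25 MiB; 0 = no limit
  maxAddressSize = 0          # illustrative; 0 = no limit
  maxRequestTimeout = 30s     # suggested default from the review prompt
}
```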
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 109-110: The call to rpcServer.handle(cachedReq, bufferedResp) in
JsonRpcServlet must be executed with a timeout and cancellation to satisfy the
request-timeout objective; replace the direct synchronous call with submitting a
task (Runnable/Callable) to an ExecutorService or use CompletableFuture, call
get(timeout, unit) and on TimeoutException cancel the future, set bufferedResp
status/content to a JSON-RPC error object with code -32002 and an appropriate
message, and ensure any thrown exceptions are handled (InterruptedException,
ExecutionException) and the task is cancelled to free resources; reference
rpcServer.handle, cachedReq, bufferedResp and the JsonRpcServlet request
handling block when making this change.
---
Duplicate comments:
In
`@framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java`:
- Around line 69-71: BufferedResponseWrapper currently constructs a PrintWriter
with the platform default charset and allows both writer and stream paths to be
used and doesn't flush the writer before commit, causing mis-encoded or
truncated output; fix by replacing the eager PrintWriter creation with a
lazily-initialized Writer that wraps the existing outputStream using an explicit
charset (e.g. UTF-8) via an OutputStreamWriter, enforce mutual exclusivity
between getWriter() and getOutputStream() (throw IllegalStateException if the
other has been obtained) in the methods that expose them, and ensure commit()
(or the method that finalizes response) flushes and closes the writer if
non-null before writing/committing the outputStream so all bytes are correctly
encoded and flushed.
- Around line 29-31: BufferedResponseWrapper currently tracks status in the
private field status but does not override getStatus(), and it doesn't intercept
header methods so downstream code can write Content-Length to the real response
prematurely; update BufferedResponseWrapper to override getStatus() to return
the internal status field and override all mutable status APIs (setStatus(int),
setStatus(int,String), sendError(...)) to update that field; also override
header APIs that can convey content length (setHeader(String,String),
addHeader(String,String), setIntHeader(String,int), addIntHeader(String,int)) to
detect "Content-Length" (case-insensitive), store/ignore it in the wrapper's
buffered headers rather than delegating to the underlying response until commit,
and ensure commit writes the buffered Content-Length appropriately.
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`:
- Around line 109-113: The current catch around rpcServer.handle(cachedReq,
bufferedResp) rethrows all exceptions as IOException, which can turn handler
errors into raw HTTP 500 responses instead of JSON-RPC error envelopes; change
the handler so that exceptions are caught and translated into a proper JSON-RPC
error written to bufferedResp (or let rpcServer produce the error envelope)
rather than throwing IOException — i.e., in the catch block use rpcServer or a
utility to build a JSON-RPC error object for cachedReq and write it to
bufferedResp (and log the exception) so clients receive a valid JSON-RPC error
response instead of an HTTP exception.
- Around line 89-97: The code currently reads the entire request into memory via
readBody and then passes the full byte[] to MAPPER.readTree, which allows
unbounded allocation and also maps all IOExceptions to JsonRpcError.PARSE_ERROR;
update JsonRpcServlet to limit and validate request size by replacing the
unbounded readBody usage with a capped read (or InputStream wrapper with a
maxBytes threshold) or use Jackson's streaming parser directly on the request
InputStream to enforce a maximum allowed body size, return a size-related error
(e.g., WRITE a JsonRpcError indicating payload too large) when the cap is
exceeded, and distinguish IO exceptions caused by transport failures from JSON
parse errors by catching and mapping JsonProcessingException / Jackson parsing
exceptions to JsonRpcError.PARSE_ERROR while mapping plain IOExceptions to an
appropriate transport/IO error response via writeJsonRpcError (references:
JsonRpcServlet, readBody, MAPPER.readTree, writeJsonRpcError,
JsonRpcError.PARSE_ERROR).
- Around line 98-102: The batch-limit check should only apply when
parameter.getJsonRpcMaxBatchSize() > 0 and oversized-batch responses must be a
JSON-RPC batch (an array), not a single error object. Change the if condition
that uses rootNode.isArray() to also require parameter.getJsonRpcMaxBatchSize()
> 0, and when the batch is too large call a routine that writes a batch-shaped
error (an array containing the JsonRpcError.EXCEED_LIMIT object) instead of
writeJsonRpcError's single-object response; you can either add a new helper
(e.g., writeJsonRpcErrorBatch) or extend writeJsonRpcError to accept a flag to
wrap the error in an array — reference rootNode,
parameter.getJsonRpcMaxBatchSize(), writeJsonRpcError, and
JsonRpcError.EXCEED_LIMIT to locate the change.
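The batch-limit rule in the last item above can be sketched without Jackson. The helper name and error message are assumptions; only the shape matters: the check is skipped when the configured limit is <= 0, and an oversized batch gets an array-shaped error body mirroring the batch request:

```java
public class BatchLimitDemo {
    static final String EXCEED_LIMIT_ERROR =
        "{\"jsonrpc\":\"2.0\",\"id\":null,"
        + "\"error\":{\"code\":-32005,\"message\":\"batch size exceeds limit\"}}";

    // Returns null when the batch is acceptable, otherwise a batch-shaped
    // (array) error body. A limit of <= 0 disables the check entirely.
    static String checkBatch(int batchSize, int maxBatchSize) {
        if (maxBatchSize > 0 && batchSize > maxBatchSize) {
            return "[" + EXCEED_LIMIT_ERROR + "]"; // array, matching the request shape
        }
        return null;
    }

    public static void main(String[] args) {
        System.out.println(checkBatch(500, 0) == null);           // limit disabled
        System.out.println(checkBatch(101, 100).startsWith("[")); // array-shaped error
    }
}
```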
---
Nitpick comments:
In
`@framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java`:
- Around line 25-165: Add tests exercising the writer path for
BufferedResponseWrapper: create tests that call getWriter() to write UTF-8 text,
flush and/or close the writer, then call commitToResponse() and assert the Mock
response body bytes and content length match the written text and encoding; also
add tests that calling getWriter() after using getOutputStream() (and vice
versa) throws IllegalStateException to enforce exclusivity between getWriter()
and getOutputStream(). Target the same test class and use the existing mockResp
and methods like getWriter(), getOutputStream(), commitToResponse(),
isOverflow(), and getContentAsByteArray() to validate behavior and prevent
regressions.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: ef9899cb-90af-4c2b-aa17-169cb7facbde
📒 Files selected for processing (10)
- common/src/main/java/org/tron/common/parameter/CommonParameter.java
- common/src/main/java/org/tron/core/config/args/NodeConfig.java
- common/src/main/resources/reference.conf
- framework/src/main/java/org/tron/core/config/args/Args.java
- framework/src/main/java/org/tron/core/services/filter/BufferedResponseWrapper.java
- framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
- framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java
- framework/src/main/java/org/tron/core/services/jsonrpc/filters/LogFilter.java
- framework/src/main/resources/config.conf
- framework/src/test/java/org/tron/core/services/filter/BufferedResponseWrapperTest.java
✅ Files skipped from review due to trivial changes (2)
- framework/src/main/java/org/tron/core/config/args/Args.java
- common/src/main/java/org/tron/core/config/args/NodeConfig.java
🚧 Files skipped from review as they are similar to previous changes (3)
- common/src/main/java/org/tron/common/parameter/CommonParameter.java
- framework/src/main/resources/config.conf
- framework/src/main/java/org/tron/core/services/filter/CachedBodyRequestWrapper.java
# Maximum number of requests in a JSON-RPC batch, >0 otherwise no limit
maxBatchSize = 100

# Maximum response body size in bytes for JSON-RPC (default 25MB)
maxResponseSize = 26214400

# Maximum number of addresses in a single JSON-RPC request
maxAddressSize = 1000
Missing maxRequestTimeout configuration key from the stated objective.
Line 403–411 adds the size-related guards, but the linked objective also requires configurable request timeout control. Without a node.jsonrpc.maxRequestTimeout default here, operators cannot enable/override timeout behavior consistently via config.
Suggested config addition
jsonrpc {
  @@
  # Maximum number of addresses in a single JSON-RPC request
  maxAddressSize = 1000
+
+  # Maximum execution time per JSON-RPC request in milliseconds, <=0 means no limit
+  maxRequestTimeout = 0
}
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@common/src/main/resources/reference.conf` around lines 403 - 411, Add the
missing node.jsonrpc.maxRequestTimeout configuration key alongside the other
JSON-RPC guards (near maxBatchSize, maxResponseSize, maxAddressSize); define a
sensible default duration (for example node.jsonrpc.maxRequestTimeout = 30s) so
operators can override request timeout behavior via config and the node.jsonrpc
namespace is complete.
try {
  rpcServer.handle(cachedReq, bufferedResp);
Request timeout objective is not enforced at execution point.
Line 109 executes rpcServer.handle(...) synchronously with no timeout boundary, but the linked objective requires request timeout control (-32002 on timeout).
I can propose a minimal timeout wrapper pattern (with cancellation + JSON-RPC -32002 mapping) if you want me to draft it.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@framework/src/main/java/org/tron/core/services/jsonrpc/JsonRpcServlet.java`
around lines 109 - 110, The call to rpcServer.handle(cachedReq, bufferedResp) in
JsonRpcServlet must be executed with a timeout and cancellation to satisfy the
request-timeout objective; replace the direct synchronous call with submitting a
task (Runnable/Callable) to an ExecutorService or use CompletableFuture, call
get(timeout, unit) and on TimeoutException cancel the future, set bufferedResp
status/content to a JSON-RPC error object with code -32002 and an appropriate
message, and ensure any thrown exceptions are handled (InterruptedException,
ExecutionException) and the task is cancelled to free resources; reference
rpcServer.handle, cachedReq, bufferedResp and the JsonRpcServlet request
handling block when making this change.
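A minimal timeout wrapper along the lines the comment proposes. This is a sketch, not the actual servlet change: it assumes a single-thread executor per call for clarity (a shared pool would be used in practice), and the error body strings are illustrative.

```java
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TimeoutGuardDemo {
    // Runs a handler with a wall-clock timeout; on timeout the task is
    // cancelled and a JSON-RPC error body with code -32002 is returned.
    static String handleWithTimeout(Callable<String> handler, long timeoutMs) {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        Future<String> future = pool.submit(handler);
        try {
            return future.get(timeoutMs, TimeUnit.MILLISECONDS);
        } catch (TimeoutException e) {
            future.cancel(true); // interrupt the worker so it frees resources
            return "{\"jsonrpc\":\"2.0\",\"id\":null,"
                 + "\"error\":{\"code\":-32002,\"message\":\"request timed out\"}}";
        } catch (InterruptedException | ExecutionException e) {
            future.cancel(true);
            return "{\"jsonrpc\":\"2.0\",\"id\":null,"
                 + "\"error\":{\"code\":-32603,\"message\":\"internal error\"}}";
        } finally {
            pool.shutdownNow();
        }
    }

    public static void main(String[] args) {
        String slow = handleWithTimeout(() -> {
            Thread.sleep(5_000);             // simulated slow handler
            return "{\"result\":\"late\"}";
        }, 100);
        System.out.println(slow.contains("-32002")); // slow call maps to timeout error
        String fast = handleWithTimeout(() -> "{\"result\":\"ok\"}", 1_000);
        System.out.println(fast.contains("ok"));     // fast call passes through
    }
}
```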
What does this PR do?
Adds configurable resource limits to the JSON-RPC endpoint to prevent memory exhaustion and abuse from oversized requests or responses. Closes tronprotocol#6632

Changes:

Batch size limit (node.jsonrpc.maxBatchSize, default: 100)
- Batches larger than the limit are rejected with error code -32005 (exceed limit).
- maxBatchSize ≤ 0 disables the check (no limit).

Response size limit (node.jsonrpc.maxResponseSize, default: 25 MB)
- BufferedResponseWrapper: intercepts getOutputStream() and getWriter() writes into an in-memory buffer. When a write would exceed the configured limit, it sets an overflow flag and resets the buffer instead of continuing to accumulate bytes, bounding worst-case memory usage to at most maxResponseSize.
- CachedBodyRequestWrapper: replays the pre-read request body via both getInputStream() and getReader(), so the body can be inspected before being forwarded to JsonRpcServer.
- After handling, the servlet checks isOverflow() and, if set, discards the partial buffer and returns error code -32003 (response too large).

Address list limit (node.jsonrpc.maxAddressSize, default: 1000)
- In LogFilter, validates the address array length in eth_getLogs / eth_newFilter requests.
- Oversized address lists fail with JsonRpcInvalidParamsException.

Structured JSON-RPC error responses
- writeJsonRpcError uses ObjectMapper to build error responses safely, avoiding JSON injection from error messages.
- Error codes: -32700 parse error, -32005 exceed limit, -32003 response too large.

Why are these changes required?
- Bounds worst-case memory usage to at most maxResponseSize and fails fast rather than buffering the entire response before checking.

Configuration

This PR has been tested by:
- Unit tests (BufferedResponseWrapperTest)
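The overflow semantics described for BufferedResponseWrapper can be sketched as a bounded OutputStream. This is assumed behavior reconstructed from the description above, not the actual class: once a write would cross the cap, the partial buffer is discarded and a latch is set.

```java
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Illustrative bounded buffer: overflow is latched, and the partial body is
// dropped immediately so memory never exceeds maxSize.
class BoundedBuffer extends OutputStream {
    private final ByteArrayOutputStream buf = new ByteArrayOutputStream();
    private final int maxSize;
    private boolean overflow;

    BoundedBuffer(int maxSize) { this.maxSize = maxSize; }

    @Override public void write(int b) {
        if (overflow) {
            return;                 // everything after overflow is discarded
        }
        if (buf.size() + 1 > maxSize) {
            buf.reset();            // free the partial body immediately
            overflow = true;
            return;
        }
        buf.write(b);
    }

    boolean isOverflow() { return overflow; }
    int size() { return buf.size(); }
}

public class OverflowDemo {
    public static void main(String[] args) throws IOException {
        BoundedBuffer b = new BoundedBuffer(4);
        b.write(new byte[]{1, 2, 3});
        System.out.println(b.isOverflow()); // false: still under the cap
        b.write(new byte[]{4, 5});          // fifth byte crosses the cap
        System.out.println(b.isOverflow()); // true: overflow latched
        System.out.println(b.size());       // 0: partial buffer was reset
    }
}
```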