JSON Formatter
Format, validate & minify JSON data. Error detection included.
What is JSON formatting?
JSON, defined in RFC 8259, is the most common text-based data interchange format on the web. Every REST API response, every package.json, every CloudWatch event, every Slack webhook payload ships JSON over the wire. The format is deliberately small: six structural tokens ({, }, [, ], ,, :), three literal names (true, false, null), numbers, and double-quoted strings. That minimalism is also the readability problem. Servers serialize JSON with no whitespace to save bytes, and the resulting single-line blob is illegible past about 200 characters.
Formatting, validation, and minification are three distinct operations on the same parse tree. Formatting (also called pretty-printing or beautifying) inserts indentation and newlines so structure becomes scannable. Validation parses the input and reports whether it conforms to RFC 8259. Minification strips every byte of optional whitespace for production transport. Worked example: the minified API response {"id":42,"name":"Ada","active":true,"tags":["admin","beta"],"score":98.6} is one line of 73 bytes. Reformatted with a 2-space indent, the same payload spans 10 lines, grows to about 107 bytes on disk, and reads at a glance. We round-trip via JSON.parse followed by JSON.stringify(obj, null, 2), so the bytes we emit are byte-equivalent to the input modulo whitespace.
How JSON parsing works
RFC 8259 is strict by design. There are no trailing commas after the last element, no // or /* */ comments, no single-quoted strings, no unquoted object keys, no hex literals, no NaN or Infinity. Lenient dialects exist for specific niches: JSONC (used by tsconfig.json and VS Code settings.json) permits comments, and JSON5 adds trailing commas, single quotes, and unquoted keys for hand-edited config. We deliberately implement the strict spec because that is what the network actually carries: production HTTP APIs overwhelmingly emit RFC 8259 JSON, and a permissive parser hides the bug instead of surfacing it.
Four failure modes account for nearly every parse error we have seen: unmatched brackets (an opener with no closer, or vice versa), missing commas between fields, unquoted keys carried over from a JavaScript object literal, and trailing commas left behind by a copy-paste from a comma-separated list. Worked example: paste {"a":1,"b":2,} and a strict parser raises an error such as SyntaxError: Unexpected token } in JSON at position 13 (the exact wording varies by engine). The character offset (13) points at the closing brace, but the actual offender is the comma at offset 12. Scanning backward from the offset against the four-failure list resolves the diagnosis in seconds. RFC 6901 (JSON Pointer) and JSON Schema draft 2020-12 are adjacent specifications layered on top of the same syntax: JSON Pointer addresses a node by path (/tags/0), and JSON Schema validates structure against a contract. We do not validate against a schema here. We validate that the bytes are well-formed RFC 8259, which is a prerequisite for everything else.
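The diagnosis step can be sketched in a few lines, assuming a V8-family engine whose SyntaxError message embeds a character offset (the message format is engine-specific, so extracting the offset is best-effort, and `diagnose` is our illustrative name, not a library API):

```javascript
// Try to parse; on failure, pull the offset out of the engine's message
// so the caller can point at the offending character. Older V8 says
// "... at position N"; newer V8 quotes the surrounding context instead,
// which is why the regex may not match and pos falls back to -1.
function diagnose(text) {
  try {
    JSON.parse(text);
    return { ok: true };
  } catch (err) {
    const m = /position (\d+)/.exec(err.message);
    const pos = m ? Number(m[1]) : -1;
    return {
      ok: false,
      message: err.message,
      pos,
      near: pos >= 0 ? text[pos] : undefined, // character at the reported offset
    };
  }
}

console.log(diagnose('{"a":1,"b":2,}').ok); // → false (trailing comma)
console.log(diagnose('{"a":1,"b":2}').ok);  // → true
```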
Common pitfalls
Most JSON debugging time is not spent on the parser error itself. It is spent on the assumptions we carried into the payload: that comments survive the wire, that numbers round-trip, that key order is preserved, that one editor's line endings match another's. Four pitfalls account for the bulk of the production incidents we have triaged.
- Trusting key order to be canonical. ECMAScript specifies that `JSON.parse` preserves insertion order for object properties (except integer-like keys, which sort numerically), but nothing obliges the next serializer in the pipeline to do the same. Downstream consumers that depend on a canonical key order break the moment a serializer rewrites in alphabetical order: signature verification over canonical JSON (RFC 8785, JCS), content-addressed storage, and merkle-rooted audit logs all fail when the same logical document hashes to two different digests. We never re-sort keys here for that reason.
- Pasting JSON-with-comments (JSONC) and wondering why it fails. RFC 8259 does not allow `//` and `/* */` tokens. A `tsconfig.json` or `devcontainer.json` pasted straight into a strict parser raises an error at the first comment character. The fix is to strip comments first (Microsoft's `jsonc-parser` npm package does this correctly) or switch to a JSON5 parser if trailing commas are also in play.
- Treating numbers loosely. JSON has one numeric type in practice, IEEE 754 double precision, with a safe-integer range of -(2^53 - 1) to 2^53 - 1 (`Number.MAX_SAFE_INTEGER` is 9,007,199,254,740,991). The literal `1e308` parses cleanly, but a subsequent multiply overflows to `Infinity`, which is not legal JSON and serializes to `null`. Snowflake IDs and 64-bit timestamps lose precision silently above 2^53. We ship them as strings.
- Mixing line endings inside multi-line strings. JSON string values can carry `\n`, `\r`, and `\r\n` escapes, and a payload authored on Windows can embed CRLF where the downstream parser expects LF only. Some log-aggregator regex rules, YAML loaders, and shell heredocs break on CR. We normalize to LF on write and reject mixed endings on read in any pipeline that touches both platforms.
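The precision pitfall in the list above is easy to demonstrate: a 64-bit ID just past 2^53 silently changes value through a parse, while the same ID shipped as a string survives byte-for-byte. A quick sketch:

```javascript
// 9007199254740993 is 2^53 + 1: it has no exact IEEE 754 double
// representation, so JSON.parse rounds it to the nearest double.
const asNumber = JSON.parse('{"id": 9007199254740993}');
console.log(asNumber.id);   // → 9007199254740992 (off by one, no error raised)

// The same ID as a string round-trips exactly.
const asString = JSON.parse('{"id": "9007199254740993"}');
console.log(asString.id);   // → "9007199254740993"
```

Shipping large IDs as strings is the portable fix; the consumer converts to BigInt (or its language's 64-bit integer type) at the edge.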
When to use this tool
We built the formatter for three concrete workflows we kept hitting ourselves. The first is a backend dev debugging a 400 response from an upstream API. The vendor returns a 600-byte error envelope on a single line, the curl pipe in the terminal wraps it into an unreadable smear, and the on-call clock is ticking. Paste, reformat with 2-space indent, read the error.code and error.fields[] path in under ten seconds. The second is a reviewer reading a large config file in a code-review tab: a 4KB Kubernetes manifest, a Terraform state slice, or a feature-flag bundle that arrived minified from a teammate's test export. Reformatting before review surfaces the diff that matters and hides the whitespace noise that does not. The third is an ops engineer flattening multi-line logs into single-line minified format for log-aggregator ingestion. ELK, Datadog, and Splunk all expect one event per line, and a multi-line stack trace embedded inside a JSON event breaks line-based parsers without minification first. The same three workflows drive the keyboard shortcut layout in the tool: format, minify, and copy are all one click apart, because that is the rhythm we move at when we are actually using it.
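The log-ingestion workflow above hinges on one property of minification: `JSON.stringify` escapes every raw newline inside string values, so even an event embedding a multi-line stack trace serializes to exactly one physical line. A sketch:

```javascript
// A log event carrying a multi-line stack trace in a string field.
const event = {
  level: "error",
  msg: "boom",
  stack: "Error: boom\n    at handler (app.js:12)\n    at run (app.js:40)",
};

// Minified: raw newlines become \n escape sequences, so the whole event
// occupies a single physical line — one event per line, as line-based
// ingestion pipelines expect.
const line = JSON.stringify(event);
console.log(line.includes("\n"));   // → false (no physical newlines)
```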
Frequently asked
- Does this formatter modify my data?
- No. Processing is 100% client-side via the native `JSON.parse` and `JSON.stringify` round-trip, so the document never leaves the browser. We add whitespace for display only and preserve the input key order exactly as parsed. We deliberately do not re-sort keys, because canonical-JSON consumers (signature verification, content-addressed storage) can break when key order shifts. The bytes we emit are byte-equivalent to the input modulo whitespace.
- Why does my JSON have an 'Unexpected token' error?
- Four causes account for nearly every case under RFC 8259. First, a trailing comma after the last array element or object property. Second, a missing comma between two fields. Third, single quotes around strings (RFC 8259 mandates double quotes). Fourth, unquoted object keys (legal in JavaScript object literals, illegal in JSON). The error position our parser surfaces is the character offset where parsing stopped, which is often one position past the actual offender, so we scan backward from it against those four patterns.
- Should I use 2-space or 4-space indentation?
- We default to 2-space. Prettier defaults to 2, the Node.js docs use 2, and `JSON.stringify(obj, null, 2)` is the universal idiom. 4-space indent helps when nesting runs five or more levels deep and the eye loses track of structure. 2-space wins for everything else: smaller diffs in source control, fewer wrapped lines on narrow terminals, less wasted screen width on deeply indented payloads.
- Can it handle JSON with comments (JSONC)?
- No. RFC 8259 forbids comments outright, so a strict parser rejects any `//` or `/* */` token. JSONC is a separate dialect used by VS Code's `tsconfig.json` and `settings.json`, and it requires a dedicated parser like Microsoft's `jsonc-parser` package. The simplest workaround is to strip single-line and block comments before pasting. JSON5 adds further extensions (trailing commas, single quotes, unquoted keys) and is also unsupported here.
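The "strip comments first" workaround is less trivial than a regex suggests, because `//` can legally appear inside a string value (think URLs). A minimal string-aware sketch — `stripJsonComments` is our illustrative name, not a library API, and for production use `jsonc-parser` handles edge cases this sketch ignores:

```javascript
// Walk the input once, tracking whether we are inside a string so that
// comment markers in string values are left untouched. Handles // line
// and /* */ block comments; trailing commas are JSON5 territory.
function stripJsonComments(text) {
  let out = "";
  let inString = false;
  let i = 0;
  while (i < text.length) {
    const ch = text[i];
    if (inString) {
      out += ch;
      if (ch === "\\") { out += text[i + 1] ?? ""; i += 2; continue; } // keep escape pair
      if (ch === '"') inString = false;
      i += 1;
      continue;
    }
    if (ch === '"') { inString = true; out += ch; i += 1; continue; }
    if (ch === "/" && text[i + 1] === "/") {           // line comment: drop to end of line
      while (i < text.length && text[i] !== "\n") i += 1;
      continue;
    }
    if (ch === "/" && text[i + 1] === "*") {           // block comment: drop through */
      i += 2;
      while (i < text.length && !(text[i] === "*" && text[i + 1] === "/")) i += 1;
      i += 2;
      continue;
    }
    out += ch;
    i += 1;
  }
  return out;
}
```

A naive regex would mangle `{"url": "https://example.com"}` by treating `//example.com"}` as a comment; the scanner above leaves it intact.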
- Is it safe to paste sensitive data here?
- Yes. Every operation runs in the browser via `JSON.parse` and `JSON.stringify`: no fetch call, no analytics beacon on the input, no server roundtrip. We host static HTML and the JavaScript bundle, nothing more. The realistic risk is not us, it is the screen itself: do not paste credentials in a shared screen-record, a screenshot bound for a ticket, or a coworking-space monitor visible to people walking past.
- What's the largest JSON file this can handle?
- Practical ceiling around 10MB before parsing slows visibly in modern Chromium and Firefox. Past 10MB, V8's parser still finishes but UI thread blocking becomes obvious. Our prismjs syntax highlighter is the tighter bottleneck and starts to drag on highly-nested files past roughly 5MB. For larger payloads we recommend a streaming parser: `clarinet` for SAX-style JavaScript, or `jq --stream` on the command line for terabyte-scale logs.