The Node.js Platform
Node.js changed how we think about server-side JavaScript. To write efficient Node.js code, you must understand its architecture—particularly the event loop and the reactor pattern that makes non-blocking I/O possible.
The Philosophy of Node.js
Node.js was built with a specific philosophy that shapes everything about it:
Small Core
The Node.js core is intentionally minimal. It provides just enough to build upon:
- File system operations
- Networking (TCP, UDP, HTTP)
- Binary data handling
- Streams
- Basic utilities
Everything else comes from npm packages. This keeps the core stable and lets the ecosystem evolve rapidly.
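For instance, a working HTTP server needs nothing beyond the core http module; a minimal sketch (the port and response text are arbitrary):
// A complete server built only from the Node.js core.
const http = require("http");

http
  .createServer((req, res) => {
    res.end("hello from the small core\n");
  })
  .listen(3000);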
Small Modules
The Unix philosophy: do one thing and do it well.
// Bad: monolithic module
import { parseJSON, validateSchema, transformData, saveToDb } from "mega-utils";
// Good: small, focused modules
import { parse } from "json-parser";
import { validate } from "schema-validator";
import { transform } from "data-transformer";
Note
Many npm packages are only a hundred or so lines of code. That isn't a limitation; it's a feature. Small modules are easier to test, understand, and replace.
Small Surface Area
Modules should expose minimal functionality. A good module does one thing and exposes one clear API.
// Large surface area (avoid)
export class DataProcessor {
parse() {}
validate() {}
transform() {}
save() {}
load() {}
cache() {}
// 20 more methods...
}
// Small surface area (preferred)
export function processData(input, options) {
// Does one thing well
return transformedData;
}
The Reactor Pattern
Node.js is built on the reactor pattern—an event-driven architecture for handling I/O operations efficiently.
How It Works
1. The application submits an I/O request to the Event Demultiplexer
2. The request is processed asynchronously (by the OS or the thread pool)
3. When it completes, an event is pushed to the Event Queue
4. The Event Loop processes events one by one
5. The associated callbacks are invoked (sketched in the code below)
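A minimal sketch of that flow using fs.readFile (the file name and log messages are illustrative):
const fs = require("fs");

// 1. The read request is handed to the Event Demultiplexer and the call
//    returns immediately; nothing blocks here.
fs.readFile("data.txt", (err, data) => {
  // 4-5. Once the completion event reaches the Event Queue, the Event Loop
  //      picks it up and invokes this callback.
  if (err) {
    console.error("read failed:", err.message);
    return;
  }
  console.log("read", data.length, "bytes");
});

// 2-3. While the OS (or thread pool) performs the read, the main thread
//      keeps running: this line prints before the callback does.
console.log("request submitted, not waiting");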
The Reactor Pattern in Action
┌─────────────────────────────────────────────────────────┐
│ Your Application │
│ │
│ fs.readFile('data.txt', callback) │
│ │ │
└───────────┼──────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Event Demultiplexer │
│ │
│ • Receives I/O requests │
│ • Delegates to OS or thread pool │
│ • Watches for completion │
│ │
└─────────────────────────────────────────────────────────┘
│
│ (I/O completes)
▼
┌─────────────────────────────────────────────────────────┐
│ Event Queue │
│ │
│ [readFile complete] → [timer fired] → [network data] │
│ │
└─────────────────────────────────────────────────────────┘
│
▼
┌─────────────────────────────────────────────────────────┐
│ Event Loop │
│ │
│ while (events in queue) { │
│ event = queue.dequeue(); │
│ event.callback(event.result); │
│ } │
│ │
└─────────────────────────────────────────────────────────┘
Why This Matters
Traditional servers use one thread per connection:
Request 1 → Thread 1 (blocked waiting for DB)
Request 2 → Thread 2 (blocked waiting for file)
Request 3 → Thread 3 (blocked waiting for API)
...
Request 10000 → Out of memory!
Node.js uses one thread for many connections:
Request 1 → Event Loop → "Read DB, call me back"
Request 2 → Event Loop → "Read file, call me back"
Request 3 → Event Loop → "Call API, call me back"
...
Request 10000 → Still just one thread, handling callbacks
Warning
The single thread is a double-edged sword. One CPU-intensive operation blocks everything. Always offload heavy computation to worker threads or child processes.
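A small sketch of the problem, blocking the loop with a busy-wait (the one-second duration is arbitrary):
// The timer is due after 10ms, but its callback cannot run until the
// synchronous loop below releases the one and only thread.
setTimeout(() => console.log("timer fired"), 10);

const start = Date.now();
while (Date.now() - start < 1000) {
  // busy-wait for ~1 second: nothing else in the process can run
}
console.log("busy work done");
// Output: "busy work done", then "timer fired" roughly a second late.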
The Event Loop Deep Dive
The event loop is not a simple queue. It has multiple phases, each with its own queue:
Event Loop Phases
┌───────────────────────────┐
┌─►│ timers │ ← setTimeout, setInterval
│ └─────────────┬─────────────┘
│ ┌─────────────┴─────────────┐
│ │ pending callbacks │ ← I/O callbacks deferred from previous loop
│ └─────────────┬─────────────┘
│ ┌─────────────┴─────────────┐
│ │ idle, prepare │ ← internal use only
│ └─────────────┬─────────────┘
│ ┌─────────────┴─────────────┐
│ │ poll │ ← retrieve new I/O events
│ └─────────────┬─────────────┘
│ ┌─────────────┴─────────────┐
│ │ check │ ← setImmediate callbacks
│ └─────────────┬─────────────┘
│ ┌─────────────┴─────────────┐
│ │ close callbacks │ ← socket.on('close', ...)
│ └─────────────┬─────────────┘
│ │
└────────────────┘
Phase Details
1. Timers Phase
Executes callbacks scheduled by setTimeout() and setInterval().
setTimeout(() => console.log("timer 1"), 0);
setTimeout(() => console.log("timer 2"), 0);
// Both execute in the timers phase
2. Poll Phase
The heart of the event loop (see the sketch after this list). It:
- Calculates how long to block waiting for I/O
- Processes events in the poll queue
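A short sketch of where things land, triggered by reading this very file:
const fs = require("fs");

// The completed read is collected and its callback executed in the poll phase.
fs.readFile(__filename, () => {
  console.log("poll phase: readFile callback");

  // From here, the loop moves on to the check phase before the next timers
  // phase, so setImmediate fires before a 0ms setTimeout.
  setImmediate(() => console.log("check phase: setImmediate"));
  setTimeout(() => console.log("timers phase (next iteration): setTimeout"), 0);
});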
3. Check Phase
Executes setImmediate() callbacks.
setImmediate(() => console.log("immediate"));
// Executes after poll phase completes
4. Close Callbacks
Handles close events like socket.on('close', ...).
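A minimal sketch using a throwaway TCP server (listening on port 0 lets the OS pick any free port):
const net = require("net");

const server = net.createServer((socket) => {
  // Runs in the close callbacks phase once the connection is torn down.
  socket.on("close", () => console.log("server-side socket closed"));
  socket.end("bye\n");
});

server.listen(0, () => {
  const client = net.connect(server.address().port);
  // The client's own 'close' is also handled in the close callbacks phase.
  client.on("close", () => {
    console.log("client socket closed");
    server.close();
  });
});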
Microtasks: process.nextTick and Promises
Microtasks are not tied to any phase. Node.js drains the process.nextTick queue, then the Promise microtask queue, as soon as the currently executing callback completes, before the event loop moves on:
setTimeout(() => console.log("timeout"), 0);
setImmediate(() => console.log("immediate"));
Promise.resolve().then(() => console.log("promise"));
process.nextTick(() => console.log("nextTick"));
console.log("sync");
// Output:
// sync
// nextTick
// promise
// timeout (or immediate, order not guaranteed at top level)
// immediate (or timeout)
Event Loop Execution Order
const fs = require("fs");
console.log("1: sync start");
setTimeout(() => console.log("2: timeout 0"), 0);
setImmediate(() => console.log("3: immediate"));
Promise.resolve().then(() => console.log("4: promise"));
process.nextTick(() => console.log("5: nextTick"));
fs.readFile(__filename, () => {
console.log("6: file read");
setTimeout(() => console.log("7: timeout in callback"), 0);
setImmediate(() => console.log("8: immediate in callback"));
process.nextTick(() => console.log("9: nextTick in callback"));
});
console.log("10: sync end");
// Output:
// 1: sync start
// 10: sync end
// 5: nextTick
// 4: promise
// 2: timeout 0   (2 and 3 may swap; top-level timer/immediate order isn't guaranteed)
// 3: immediate
// 6: file read
// 9: nextTick in callback
// 8: immediate in callback ← setImmediate always before setTimeout inside I/O callback
// 7: timeout in callback
Warning
process.nextTick starves the event loop if used recursively. Prefer setImmediate for deferring work.
// BAD: starves event loop
function recursive() {
process.nextTick(recursive); // I/O never gets processed!
}
// GOOD: allows I/O between iterations
function recursive() {
setImmediate(recursive); // I/O can happen between calls
}
libuv: The Engine Under the Hood
Node.js uses libuv to handle asynchronous I/O across different operating systems.
What libuv Provides
- Cross-platform async I/O (file, network, DNS)
- Event loop implementation
- Thread pool for blocking operations
- Child processes
- Timers
The Thread Pool
Some operations can't be done asynchronously at the OS level:
- File system operations (on many systems)
- DNS lookups
- Some crypto operations
- Zlib compression
These run in libuv's thread pool (default: 4 threads).
// These use the thread pool
const fs = require("fs");
const crypto = require("crypto");
fs.readFile("large.txt", callback); // Thread pool
crypto.pbkdf2("password", "salt", 100000, 64, "sha512", callback); // Thread pool
// These use OS async I/O (no thread pool)
const http = require("http");
http.get("http://example.com", callback); // OS async
Note
Increase thread pool size for I/O-heavy apps:
UV_THREADPOOL_SIZE=16 node app.js
The maximum is 1024; around 4× the number of CPU cores is a common starting point.
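One way to observe the pool limit is to start more pbkdf2 jobs than there are threads (the job count and iteration count below are arbitrary):
const crypto = require("crypto");

const start = Date.now();
for (let i = 1; i <= 8; i++) {
  // Each pbkdf2 call occupies one thread-pool thread until it finishes.
  crypto.pbkdf2("password", "salt", 100000, 64, "sha512", () => {
    console.log(`job ${i} finished after ${Date.now() - start}ms`);
  });
}
// With the default pool size of 4, the jobs complete in two waves;
// with UV_THREADPOOL_SIZE=8 they all complete in roughly one.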
Blocking vs Non-Blocking
Understanding the difference is crucial:
Blocking Code
const fs = require("fs");
// Blocks the entire event loop!
const data = fs.readFileSync("large-file.txt");
console.log(data.length);
console.log("This waits for file read");Non-Blocking Code
const fs = require("fs");
// Event loop continues immediately
fs.readFile("large-file.txt", (err, data) => {
console.log(data.length);
});
console.log("This runs before file read completes");Real-World Impact
Blocking Destroys Performance
const http = require("http");
const fs = require("fs");
// BAD: Blocks all requests during file read
http
.createServer((req, res) => {
const data = fs.readFileSync("large.txt"); // 100ms blocking
res.end(data);
})
.listen(3000);
// With 1000 concurrent requests: 1000 × 100ms = 100 seconds total
// GOOD: Non-blocking, concurrent reads
http
.createServer((req, res) => {
fs.readFile("large.txt", (err, data) => {
res.end(data);
});
})
.listen(3000);
// With 1000 concurrent requests: closer to ~100ms than 100 seconds, because the reads overlap
CPU-Bound Operations
For CPU-intensive work, use worker threads:
const { Worker, isMainThread, parentPort } = require("worker_threads");

function heavyComputation(numbers) {
  // Stand-in for CPU-bound work: repeatedly sum the inputs
  let total = 0;
  for (let i = 0; i < 1e7; i++) {
    total += numbers[i % numbers.length];
  }
  return total;
}

if (isMainThread) {
  // Main thread: offload heavy work
  const worker = new Worker(__filename);
  worker.on("message", (result) => {
    console.log("Result:", result);
    worker.terminate(); // done with this worker; let the process exit
  });
  worker.postMessage({ numbers: [1, 2, 3, 4, 5] });
} else {
  // Worker thread: do heavy computation without blocking the main event loop
  parentPort.on("message", ({ numbers }) => {
    parentPort.postMessage(heavyComputation(numbers));
  });
}
Summary
The Node.js platform is built on powerful foundations:
| Concept | Purpose |
|---|---|
| Reactor Pattern | Event-driven, non-blocking I/O |
| Event Loop | Single-threaded event processor with multiple phases |
| libuv | Cross-platform async I/O with thread pool fallback |
| Small Modules | Unix philosophy for composable, maintainable code |
Key takeaways:
- Event loop phases matter: setImmediate vs setTimeout vs process.nextTick have different behaviors
- Microtasks (nextTick, promises) run outside the phases, as soon as the current callback finishes
- The thread pool handles blocking operations; tune UV_THREADPOOL_SIZE for I/O-heavy apps
- Never block the event loop with synchronous operations or heavy CPU work
- Worker threads are the solution for CPU-bound tasks
Note
Understanding the event loop isn't just theoretical—it's the key to writing performant Node.js code and debugging subtle timing issues.