7 Most Common Node.js Mistakes That Destroy App Performance
Last Tuesday, I was sitting with a junior developer who couldn’t figure out why their API was timing out. We spent maybe 20 minutes digging through the code, and then I saw it: they were reading a 50MB JSON file synchronously inside a request handler.
One sync call. That was literally killing their entire app. That moment stuck with me because it’s not even close to being the worst thing I’ve seen. Over the past few years, I’ve looked at somewhere around 200 Node.js projects, and honestly, most of them have similar issues.
Not because the developers are bad, but because nobody really talks about this stuff in a straightforward way. So I’m going to walk you through the problems I keep running into, what they actually do to your app, and how to fix them without tearing everything apart.
1. Reading Files Synchronously Right in Your Requests
This one kills me because it’s so easy to do and so deadly to your application. I was working with a startup that had a user dashboard. Super simple: fetch a user profile from a JSON file and return it. They built it like this:
app.get('/user/:id', (req, res) => {
  const userData = fs.readFileSync(`./users/${req.params.id}.json`);
  res.json(JSON.parse(userData));
});
When I saw this, I actually laughed. Not because it was stupid, but because it was so blatantly slow and they had no idea. Here’s the thing about Node.js: when you use readFileSync, you’re basically telling the entire server to just… stop. Wait. Do nothing. Just hang out while the disk reads that file.
You’ve got one user hitting your endpoint? Fine, nobody notices. You’ve got 10 users hitting it at the same time? Your server is now slower than a dial-up connection from 2003. Twenty users? You might as well just take the server offline.
The fix is stupid simple. Use async:
const fs = require('fs');

app.get('/user/:id', async (req, res) => {
  const userData = await fs.promises.readFile(`./users/${req.params.id}.json`, 'utf8');
  res.json(JSON.parse(userData));
});
That’s it. Now when one request is waiting for the file, the server can handle 50 other requests. I’ve seen this one change alone take apps from totally broken under load to actually usable. We’re talking 60-80% faster.
2. Opening a New Database Connection for Every Single Request
I was consulting with a company that had a Postgres database. Their traffic was growing, and things were getting slower. So we looked at their code, and this is what I found:
const { Client } = require('pg');

app.get('/data', async (req, res) => {
  const client = new Client(dbConfig);
  await client.connect();
  const result = await client.query('SELECT * FROM users');
  await client.end();
  res.json(result.rows);
});
Do you know what’s expensive? Creating a new database connection. Do you know what they were doing on every single request? Yeah.
When you’ve got 100 people using your app at the same time, you’re creating and destroying 100 database connections. You’re doing the handshake, the authentication, the negotiation all of that overhead 100 times. Meanwhile, the database is sweating trying to handle all these connections.
Connection pooling is the fix. You create a pool once, and then you just grab connections from the pool whenever you need them. They’re already open, already authenticated, already ready to go:
const { Pool } = require('pg');

const pool = new Pool({
  ...dbConfig, // same connection settings as before
  max: 20,
  idleTimeoutMillis: 30000,
});

app.get('/data', async (req, res) => {
  const result = await pool.query('SELECT * FROM users');
  res.json(result.rows);
});
I set it up for a client once, and their database CPU went from constantly spiking to actually stable. Same traffic, but the pooling just made it all so much smoother.
3. Connections That Never Close and Event Listeners That Stick Around Forever
This one’s sneaky because it doesn’t blow up your app immediately. It just slowly kills it.
I was brought in to debug an app that kept crashing after running for a few days. Memory was climbing and climbing until it just ran out. We dug through the code and found stuff like this:
dataStream.on('data', (chunk) => {
  processData(chunk);
});

// Later...
eventEmitter.on('update', () => {
  doSomething();
});
The problem? They never removed these listeners. So every time the app reconnected to a service or re-initialized something, more and more listeners would pile up. It’s a memory leak, just made of event handlers instead of objects.
After a week, you’d have thousands of listeners all firing at once. Memory would be gone, and the app would crash.
The fix is boring but necessary:
// once() removes the listener automatically after it fires,
// so it can't pile up across reconnects
dataStream.once('data', (chunk) => {
  processData(chunk);
});

// Or register normally and explicitly clean up
const handler = () => {
  doSomething();
};
eventEmitter.on('update', handler);

// Later, when you no longer need it
eventEmitter.off('update', handler);
And always, always close your connections properly:
let connection;
try {
  connection = await db.connect();
  const result = await connection.query(sql);
  return result;
} finally {
  if (connection) {
    await connection.close();
  }
}
I started using heap snapshots to catch this stuff early. Takes maybe 30 minutes to set up, and you’ll instantly see what’s leaking.
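If you want a low-effort way to grab those snapshots, here’s a rough sketch using Node’s built-in v8.writeHeapSnapshot() (the article above doesn’t name a specific tool, so treat this as one option, not the only one). Take two snapshots a while apart, load them into Chrome DevTools, and diff them to see which objects keep growing:

const v8 = require('v8');

// Write a snapshot once an hour. With no argument, writeHeapSnapshot()
// drops a Heap-<timestamp>.heapsnapshot file into the working directory
// and returns its name.
setInterval(() => {
  const file = v8.writeHeapSnapshot();
  console.log(`Heap snapshot written to ${file}`);
}, 60 * 60 * 1000);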
4. N+1 Queries: The Database Killer
This is the one that makes me shake my head because everyone does it at least once.
I was looking at an e-commerce platform. They wanted to show a list of orders with the customer details for each order. Here’s what they did:
const orders = await Order.find({});
for (const order of orders) {
  order.customer = await Customer.findById(order.customerId);
}
res.json(orders);
Looks reasonable, right? You get all the orders, then you fill in the customer details. Simple logic. Except…
If you’ve got 500 orders, you just ran 501 database queries. 501. Your database is gasping for air. The response time is measured in seconds. Users are sitting there waiting.
The solution is to load the data all at once:
const orders = await Order.find({}).populate('customerId');
res.json(orders);
Or if your ORM doesn’t support that, batch it yourself:
const orders = await Order.find({});
const customerIds = orders.map(o => o.customerId);
const customers = await Customer.find({ _id: { $in: customerIds } });
const customerMap = new Map(customers.map(c => [c._id.toString(), c]));

orders.forEach(o => {
  o.customer = customerMap.get(o.customerId.toString());
});

res.json(orders);
One client had this issue with a reporting page. Switching from 5,000 queries to 3 queries brought the page load time from 45 seconds down to 2 seconds. Not an incremental improvement, just absolute night and day.
5. Loading Massive Amounts of Data Into Memory at Once
I worked with a team that was trying to export a year’s worth of transaction data. They wrote code like this:
const allTransactions = await Transaction.find({});
allTransactions.forEach(transaction => {
  processTransaction(transaction);
});
On their small test database, it was fine. But on production with millions of records? The server would run out of memory and crash. Every single time.
The problem is they’re trying to load gigabytes of data into RAM all at once. That’s insane. Your server doesn’t have that kind of memory to spare. Even if it did, the garbage collector would have a meltdown trying to manage it.
The right way is to process it in chunks:
const pageSize = 1000;
let page = 0;

while (true) {
  const transactions = await Transaction.find({})
    .sort({ _id: 1 }) // a stable sort keeps pages consistent between queries
    .skip(page * pageSize)
    .limit(pageSize);

  if (transactions.length === 0) break;

  for (const transaction of transactions) {
    await processTransaction(transaction);
  }

  page++;
}
Now you’re only loading 1,000 records into memory at a time. Process them, throw them away, move on to the next batch. It’s slower than loading everything at once (obviously), but it actually completes instead of crashing.
For really big operations, use streams. That’s what they’re for.
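Here’s a rough sketch of what that looks like, assuming you’re on Mongoose (which the find() and populate() calls above suggest) and reusing the same Transaction model and processTransaction() function. The cursor streams documents one at a time instead of materializing the whole result set:

const cursor = Transaction.find({}).cursor();

// for await pulls one document at a time off the cursor,
// so memory stays flat no matter how many records there are
for await (const transaction of cursor) {
  await processTransaction(transaction);
}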
6. Recalculating the Same Thing Every Single Request
I was working with a SaaS company that had a dashboard. Every time someone loaded the page, the backend was recalculating stats from millions of database records. This calculation took 45 seconds.
They had maybe 20 people using the dashboard per day. So 20 times a day, the system would spend 45 seconds churning through data. Just waste. Complete waste.
I asked them: does this data need to be live? Do you need it updated every single second?
They said no. Once an hour is fine.
So I added caching:
let cachedStats = null;
let cacheTime = null;

app.get('/stats', async (req, res) => {
  const now = Date.now();
  if (cachedStats && (now - cacheTime) < 60 * 60 * 1000) {
    return res.json(cachedStats);
  }
  cachedStats = await calculateStats();
  cacheTime = now;
  res.json(cachedStats);
});
First request takes 45 seconds. Every request after that is instant. They love it.
For bigger systems, I use Redis. Same idea, but it survives app restarts and you can share it across multiple servers.
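Here’s a rough sketch of the same cache backed by Redis. This one assumes the ioredis client (the article doesn’t pin down a specific library), and calculateStats() is the same function from the in-memory version above:

const Redis = require('ioredis');
const redis = new Redis(); // connects to localhost:6379 by default

app.get('/stats', async (req, res) => {
  const cached = await redis.get('dashboard:stats');
  if (cached) {
    return res.json(JSON.parse(cached));
  }
  const stats = await calculateStats();
  // 'EX', 3600 makes the key expire after an hour, same as the in-memory version
  await redis.set('dashboard:stats', JSON.stringify(stats), 'EX', 3600);
  res.json(stats);
});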
7. Running Heavy Calculations on the Main Thread
I sat down with a developer who was building an image processing service. Every time someone uploaded an image, the server would resize it, apply filters, and generate thumbnails, all right in the request handler.
After processing maybe 3 images, the entire server would become unresponsive. Not just slow. Completely frozen. You couldn’t even hit a health check endpoint.
Here’s why: while the server is processing an image, it can’t do anything else. Not handle other requests, not check logs, nothing. The event loop is stuck.
The fix is to use worker threads:
const { Worker } = require('worker_threads');

app.post('/process-image', (req, res) => {
  const worker = new Worker('./imageWorker.js');
  worker.on('message', (result) => {
    res.json(result);
    worker.terminate();
  });
  worker.on('error', (err) => {
    res.status(500).json({ error: err.message });
    worker.terminate();
  });
  worker.postMessage(req.body.imageData);
});
Now the heavy work happens in the background, and your main server stays responsive. It’s async, it’s clean, and your server doesn’t freeze up.
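The worker file itself is just as simple. Here’s a minimal sketch of what ./imageWorker.js might look like — processImage() here is only a stand-in for whatever resizing and filtering library you actually use:

const { parentPort } = require('worker_threads');

// Stand-in for the real work — swap in your actual image pipeline here
function processImage(imageData) {
  return { ok: true, inputLength: String(imageData).length };
}

parentPort.on('message', (imageData) => {
  // Runs on the worker thread, so the main event loop stays free
  parentPort.postMessage(processImage(imageData));
});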
I implemented this for that client, and suddenly they could process 5 images at once without the server struggling. It was a game changer.
What I Actually Do When I Audit Code
Honestly, my process is pretty simple. I look for these patterns, I run load tests, and I see where things break. The first thing I always recommend is turning blocking operations into actual async code. That usually gets you a 50% improvement right away.
Then I profile the database. Are you doing N+1 queries? Are your connections properly pooled? Fix those, and you’re another 40% faster. Then I look for memory leaks and caching opportunities. Sounds simple, and it kind of is.
But most projects have at least 3-4 of these issues, and they all compound. Fix them all, and you’re looking at apps that are 10 times faster.
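When I say load test, I don’t mean anything fancy. Here’s a rough sketch using autocannon’s Node API — my tool choice, not something the article prescribes, and the URL is a placeholder for whatever endpoint you care about:

const autocannon = require('autocannon');

// Hammer one endpoint with 100 concurrent connections for 30 seconds
autocannon({
  url: 'http://localhost:3000/user/123', // placeholder endpoint
  connections: 100,
  duration: 30,
}, (err, result) => {
  if (err) throw err;
  console.log(`avg latency: ${result.latency.average} ms`);
  console.log(`requests/sec: ${result.requests.average}`);
});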
The Real Talk
Your app isn’t slow because of Node.js. Node.js is actually great at what it does. Your app is slow because these patterns are easy to miss when you’re focused on getting things done. Start looking for these problems in your code. Load test under real-world traffic. Profile your database. Check your memory usage over time. Do that, and I’m telling you, you’ll be surprised at what you find.
