Early in my career, my tech lead sat me down and made me learn CLI vim. Not a GUI editor, not VS Code. Vim, in the terminal. At the time it felt like hazing, but we were working on raw Kafka streams and regularly needed to debug directly on the server. There was no VS Code SSH extension to fall back on. It was vim or nothing. He was that kind of engineer: opinionated, methodical, but always opinionated for a reason. He was also a staunch advocate of using logging over print. Not sometimes. Always.
I didn’t really question it. He said logger, so I used logger. It felt like one of those things you just absorb from someone more experienced and carry forward without fully understanding why.
Years later, I was pressing the same preference on people I worked with, and someone pushed back. “Why logger? Print works fine. What’s the actual difference?” I realized I didn’t have a real answer. I had conviction without evidence. So I decided to actually find out.
So I benchmarked it
There’s a line of code almost every developer has written at some point in a production service:
print(f"Processing request: {request_id}")
It feels fine. The logs show up. The service works. But I wanted to know what was actually happening under the hood, so I wrote a small benchmark: 100,000 iterations each of print, a suppressed logger, and an active logger writing to /dev/null. I timed each call and tracked peak memory with Python’s built-in tracemalloc.
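Here is a minimal sketch of that kind of benchmark. The harness details (names, the perf_counter timing, capturing print into a StringIO rather than a real terminal) are my reconstruction, not the exact script I ran, but the three scenarios match:

```python
import io
import logging
import time
import tracemalloc

N = 100_000

def bench(fn):
    """Return (ns per call, peak MB) for N calls of fn."""
    tracemalloc.start()
    start = time.perf_counter()
    for i in range(N):
        fn(i)
    elapsed = time.perf_counter() - start
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return elapsed / N * 1e9, peak / 1e6

# Case 1: print into an in-memory buffer (stands in for buffered stdout)
buf = io.StringIO()
print_ns, print_mb = bench(lambda i: print(f"Processing request: {i}", file=buf))

# Case 2: suppressed logger -- level is INFO, so debug() is dropped early
suppressed = logging.getLogger("bench.suppressed")
suppressed.setLevel(logging.INFO)
sup_ns, sup_mb = bench(lambda i: suppressed.debug("Processing request: %s", i))

# Case 3: active logger actually writing every record to /dev/null
active = logging.getLogger("bench.active")
active.setLevel(logging.DEBUG)
active.addHandler(logging.FileHandler("/dev/null"))
active.propagate = False
actv_ns, actv_mb = bench(lambda i: active.debug("Processing request: %s", i))

print(f"print (buffered):          {print_ns:8.0f} ns  {print_mb:.3f} MB")
print(f"logger.debug (suppressed): {sup_ns:8.0f} ns  {sup_mb:.3f} MB")
print(f"logger.debug (active):     {actv_ns:8.0f} ns  {actv_mb:.3f} MB")
```

Exact numbers will vary by machine and Python version; it’s the ordering and the memory gap that are worth reproducing.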
Here’s what came back:
Scenario                     Time/call      Peak RAM
----------------------------------------------------------------
print (buffered)             ~1,800 ns      5.6 MB
logger.debug (suppressed)    ~1,357 ns      0.003 MB
logger.debug (active)        ~20,558 ns     0.087 MB
The suppressed logger, one where the log level is set above DEBUG so the message never actually gets emitted, is faster than print and uses essentially no memory. That surprised me.
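The reason it’s so cheap is that Logger.debug checks the effective level before doing any real work, and with %-style arguments the final message string is never even built for a suppressed call. A small illustration (the logger name here is just an example):

```python
import logging

logger = logging.getLogger("myapp.api")
logger.setLevel(logging.INFO)  # DEBUG is now below the threshold

# The level check happens before any formatting: with %-style
# arguments, the message string is never built for a dropped call.
logger.debug("Processing request: %s", "abc-123")  # suppressed, near-free

# Explicit guard, useful when building the payload itself is expensive:
if logger.isEnabledFor(logging.DEBUG):
    logger.debug("request payload: %r", {"id": "abc-123"})
```

This is also why `logger.debug("msg: %s", x)` is preferable to `logger.debug(f"msg: {x}")`: the f-string pays the formatting cost even when the message is dropped.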
The buffered print held 5.6 MB in memory across those 100k calls. That’s the entire accumulated output sitting in the StringIO buffer I used to capture stdout. In a real service with high throughput, that kind of retained output adds up.
The memory isn’t even the real problem
The numbers are interesting, but they’re not what actually matters in production.
print gives you nothing for free. No timestamps. No severity levels. No caller context. No way to flip verbosity without redeploying. When something goes wrong at 2 AM and you’re combing through logs in Datadog or Splunk, print output is just a wall of text with no structure to filter against.
With a logger, you get all of that by default. And when an incident hits and you need more detail, you can flip the log level to DEBUG at runtime. No deploy, no restart. That alone has saved me hours.
# print: you get this
Processing request: abc-123
# logger: you get this
2026-04-18 14:32:01,204 DEBUG myapp.api Processing request: abc-123
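The runtime flip itself is a one-liner. How you trigger it (a signal handler, an admin endpoint, a config watcher) is up to your service; the sketch below just shows the mechanism:

```python
import logging

# Handler plus the structured format from above, configured once at startup.
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger("myapp")

logger.debug("hidden during normal operation")  # below INFO, suppressed

# During an incident: raise verbosity at runtime. No deploy, no restart.
logging.getLogger().setLevel(logging.DEBUG)

logger.debug("now visible")  # emitted with timestamp, level, and logger name
```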
When print is actually fine
If you’re writing a short script, a one-off data migration, or a Lambda function where stdout gets ingested raw by CloudWatch anyway, print is fine. The complexity of a logger setup isn’t worth it.
But even then, a logger with a single StreamHandler does the exact same thing and costs you maybe five lines of setup.
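Those five-ish lines look something like this (the logger name is illustrative):

```python
import logging
import sys

# The "maybe five lines" version: one StreamHandler and you're done.
logger = logging.getLogger("myscript")
handler = logging.StreamHandler(sys.stderr)
handler.setFormatter(logging.Formatter("%(asctime)s %(levelname)s %(name)s %(message)s"))
logger.addHandler(handler)
logger.setLevel(logging.INFO)

logger.info("Processing request: %s", "abc-123")
```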
What I actually run with
Over time I stopped copy-pasting logging.basicConfig into every project and built a custom logger class that I reuse across services. It wraps the standard library with sensible defaults, but I can swap in different formatters, handlers, or log levels depending on the application. A Lambda shipping JSON to CloudWatch doesn’t need the same setup as a long-running Kafka consumer writing structured logs to Splunk. The base stays the same, the configuration adapts.
import logging

# at its simplest, the core still looks like this
logging.basicConfig(
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(name)s %(message)s",
)
logger = logging.getLogger(__name__)
The snippet above is the foundation, but the real value is in having a reusable class where handler logic, formatting, and level configuration are already solved for the environments you actually deploy to. Set it up once, carry it everywhere.
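My actual class isn’t reproduced here, but a minimal sketch of the shape, with names, defaults, and the delegation trick all being my assumptions rather than a specific production implementation, could look like this:

```python
import logging
import sys

class AppLogger:
    """Reusable wrapper: sensible defaults, swappable handler and formatter.

    Illustrative sketch only -- the names and defaults are assumptions,
    not a specific production implementation.
    """

    DEFAULT_FORMAT = "%(asctime)s %(levelname)s %(name)s %(message)s"

    def __init__(self, name, level=logging.INFO, handler=None, fmt=None):
        self.logger = logging.getLogger(name)
        self.logger.setLevel(level)
        handler = handler or logging.StreamHandler(sys.stderr)
        handler.setFormatter(logging.Formatter(fmt or self.DEFAULT_FORMAT))
        self.logger.addHandler(handler)

    def __getattr__(self, attr):
        # Delegate debug/info/warning/... to the underlying stdlib logger.
        return getattr(self.logger, attr)

# Long-running service: default stream setup.
svc_log = AppLogger("myapp.consumer")
svc_log.info("consumer started")

# Same base, different configuration -- e.g. a JSON formatter and a
# different handler for a Lambda shipping logs to CloudWatch.
```

The point isn’t this exact class; it’s that the handler, formatter, and level decisions live in one place instead of being re-decided in every service.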
Turns out my tech lead was right. I just needed someone to challenge me on it before I actually understood why.