From the start of my career, I’ve always been deep into Oracle Database, working on versions from 8i to 12c, all focused on online transaction processing (OLTP). Oracle is rock solid: safe, reliable, and, as Ask Tom will tell you, simply great. There’s even a saying in the industry: “Nobody ever got fired for buying Oracle.” And for the longest time, that was my truth.
In the OLTP world, you deal with loads that demand low latency and high transaction rates. Whenever a performance issue popped up, solving it wasn’t too complicated. You either tackled it the smart way—through optimization—or the money way, which often meant throwing more hardware at it. Either way, I was always confident I could find a solution.
But then, I landed on a project that immediately put me out of my comfort zone. The requirements were straightforward—10,000 transactions per second—but there was a twist: PostgreSQL 11.
My first thought? Oh, my gosh, what have I gotten into? Until then, I’d never encountered PostgreSQL in any real production environment, let alone in a high-concurrency, high-load scenario. For me, PostgreSQL was always the “toy” database, something you’d use for small side projects or non-critical workloads. And now, here it was, powering a project with serious transactional demands.
First Impressions: A Throwback to the Past
Diving into the PostgreSQL documentation was like a trip back to the late ’90s. Remember the old MSDN CDs for Win32 development? They had everything you needed, but you had to dig deep because the information was scattered across hundreds of pages. You’d read the docs inside and out, slowly developing a kind of telepathy to piece it all together. Well, that’s how PostgreSQL felt to me at first.
Still, I soldiered on, extracting nuggets of knowledge from the docs. Here are a few things that stood out (and surprised me):
- Updates are actually deletes and inserts. This was a bit of a shock. Technically it makes sense given the MVCC architecture: an UPDATE writes a new row version and leaves the old one behind for VACUUM to clean up, but it raised concerns for me about performance and potential bloat (there’s a quick demo after this list).
- BLOBs are stored in a separate table, TOAST. I’ll admit, this took me a while to wrap my head around. In Oracle, you just store large objects like CLOBs and BLOBs in-line or out-of-line; PostgreSQL instead silently moves oversized column values into a companion TOAST table, which felt a bit more, shall we say, fragmented (see the catalog query after the list).
- Partitioning tables is… complicated. PostgreSQL offers multiple ways to partition tables, the newer declarative syntax and the older inheritance-based approach, but the consequences of each method aren’t always clear. There was no “one-size-fits-all” answer, and I realized this was going to require careful planning (a sketch of the declarative flavor follows the list).
- Clustering is not straightforward either. It felt like PostgreSQL offered too many options, and deciding on the right cluster setup was daunting. It’s powerful, sure, but not exactly user-friendly.
- Manual read load distribution. This one really threw me. In Oracle, read operations can be automatically balanced across standby nodes. PostgreSQL, on the other hand, leaves routing reads to the client or to external tooling. Essentially, it’s DIY load balancing (see the connection-string example below)!
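To make the first point concrete, here’s a minimal sketch of what MVCC does on an UPDATE (the `accounts` table is purely illustrative): the row’s physical location (`ctid`) changes because PostgreSQL writes a fresh row version, and the old one lingers as a dead tuple until vacuum gets to it.

```sql
-- Illustrative table, not from any real schema
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);
INSERT INTO accounts VALUES (1, 100);

-- ctid is the row's physical location; xmin is the inserting transaction
SELECT ctid, xmin, balance FROM accounts WHERE id = 1;

UPDATE accounts SET balance = balance - 10 WHERE id = 1;

-- The ctid has changed: the UPDATE wrote a new row version rather than
-- modifying the old one in place
SELECT ctid, xmin, balance FROM accounts WHERE id = 1;

-- The old version is now a dead tuple, waiting for (auto)vacuum
SELECT n_dead_tup FROM pg_stat_user_tables WHERE relname = 'accounts';
```

This is exactly where the bloat worry comes from: a hot row updated thousands of times per second generates dead tuples at the same rate.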
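The TOAST arrangement is also easy to see for yourself: every table that can hold large values gets a companion table in the `pg_toast` schema, and a catalog query finds it (the `documents` table name is made up):

```sql
-- Look up the companion TOAST table for a given table; oversized
-- column values are silently moved there and fetched back on read
SELECT t.relname AS toast_table
FROM pg_class c
JOIN pg_class t ON t.oid = c.reltoastrelid
WHERE c.relname = 'documents';
```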
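As for partitioning, here’s a minimal sketch of the declarative flavor (introduced in PostgreSQL 10 and fleshed out in 11), with made-up table names:

```sql
-- Declarative range partitioning by date
CREATE TABLE payments (
    id         bigint      NOT NULL,
    created_at timestamptz NOT NULL,
    amount     numeric     NOT NULL
) PARTITION BY RANGE (created_at);

-- Every partition is an ordinary table you create (and eventually
-- detach or drop) yourself; there is no built-in scheduler for this
CREATE TABLE payments_2019_01 PARTITION OF payments
    FOR VALUES FROM ('2019-01-01') TO ('2019-02-01');
CREATE TABLE payments_2019_02 PARTITION OF payments
    FOR VALUES FROM ('2019-02-01') TO ('2019-03-01');
```

The syntax itself is clean enough; the careful planning is in everything around it: choosing the partition key, keeping indexes consistent, and creating new partitions on schedule.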
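And the “DIY load balancing” bit looks roughly like this. The server won’t route reads for you, but since PostgreSQL 10 a libpq client can at least take several hosts and pick whichever one is writable; anything smarter is on you (host names here are invented):

```shell
# Writes: libpq tries the listed hosts and keeps the one that
# reports itself read-write (i.e. the current primary)
psql "postgresql://app@pg1:5432,pg2:5432/appdb?target_session_attrs=read-write"

# Reads: you point read-only traffic at a standby yourself,
# or put a pooler/proxy in front to do it for you
psql "postgresql://app@pg2:5432/appdb"
```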
A New Challenge
All of these discoveries made me realize I was in for a serious challenge. PostgreSQL was no longer the toy database I had assumed it to be. It’s powerful, but it demands that you know it inside and out to get the most out of it—much like Oracle, but in its own unique, sometimes quirky way.
So, while it wasn’t what I was used to, I found myself intrigued. There’s something exciting about working with a new tool that forces you to rethink everything you know. And that’s where I am now—learning, testing, and building a new skill set around PostgreSQL.
Stay tuned! This journey is just beginning.