Optimistic vs. Pessimistic Locking: Differences, Best Use Cases

In applications that serve more than one user, collision scenarios are practically unavoidable. A database needs a way to maintain data consistency when more than one user is trying to change data in the same field.

This is where optimistic and pessimistic locking come in.

Pessimistic and optimistic locking (or pessimistic and optimistic concurrency) handle this issue in different ways. As such, their effectiveness varies from use case to use case.

Here, we’ll walk you through how both of these techniques work, as well as where you should apply them.

Optimistic vs. Pessimistic Locking: What’s the Difference?

Before getting into use cases, let’s pin down the difference between these two concurrency control methods.

Optimistic Locking

An optimistic lock lets more than one user attempt to change the contested field at the same time.

So, let’s say we have two users trying to change a field; let’s call them user1 and user2. In that case, neither of the two would be blocked from trying to modify the field. The optimistic protocol compares their change attempts and decides which to validate.

How does this validation work? A column with a timestamp, date, hash/checksum, or version number is attached to the record—we’ll take the version number for our example. The system checks if the version number is the same at the end of user1’s transaction as it was at the beginning. 

From there, one of two things happens:

  • The versions match: the write is committed, and the version number is changed.

  • The versions don’t match: the write attempt is rolled back, and the user has to try again.

Let’s assume user1 wants to change a record currently at version 1. If the version read at the start of user1’s transaction still matches the one in the database when the write is checked (both version 1), user1’s write succeeds uncontested and the record is bumped to version 2.

But if user2 manages to commit their transaction before user1 finishes, the record is already at version 2 by the time user1’s write is checked. The version numbers at the start and end of user1’s transaction no longer match, so the change is invalidated: user1’s work is undone, and they can give it another crack.
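To make the version check concrete, here’s a minimal sketch of one optimistic write attempt, assuming a PostgreSQL-style database reached through a DB-API driver such as psycopg2. The accounts table, its balance and version columns, and the connection passed in are illustrative assumptions, not a reference to any particular product.

```python
def apply_delta_optimistic(conn, account_id, delta):
    """One optimistic write attempt; returns True if it committed."""
    with conn.cursor() as cur:
        # Read the record together with its current version number.
        cur.execute(
            "SELECT balance, version FROM accounts WHERE id = %s",
            (account_id,),
        )
        balance, version = cur.fetchone()

        # Write back only if the version is still the one we read;
        # bump the version so the next writer sees the change.
        cur.execute(
            "UPDATE accounts "
            "SET balance = %s, version = version + 1 "
            "WHERE id = %s AND version = %s",
            (balance + delta, account_id, version),
        )
        if cur.rowcount == 0:
            # Someone else committed first, so our version is stale.
            conn.rollback()
            return False
    conn.commit()
    return True
```

The key point is that no lock is held between the read and the write; the WHERE ... AND version = %s clause is what catches a conflicting update.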

Pessimistic Locking

A database with pessimistic locking locks a record while it’s being updated. Unlike its optimistic counterpart, it doesn’t let more than one user write to the record at the same time. All updates are serialized and committed in proper order.

However, the “severity” of the block in pessimistic locking varies. Broadly, we recognize two types of pessimistic lock:

  1. Exclusive (write): no one but the user holding the lock may read or update the data

  2. Shared (read): other users can read the data but cannot update it

Carrying on with the user1-user2 saga, imagine user1 is changing data in a field. While that transaction is happening, user2 is blocked from making changes. Depending on the lock type, user2 might still be able to read the data, but they can’t update it until user1 is done. The lock lifts only after user1’s write is committed, after which user2 may proceed.
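Here’s a comparable sketch for the pessimistic case, under the same assumptions (hypothetical accounts table, DB-API connection): SELECT ... FOR UPDATE takes an exclusive row lock, so a concurrent writer simply waits instead of failing and retrying.

```python
def apply_delta_pessimistic(conn, account_id, delta):
    """Serialize writers by locking the row for the whole transaction."""
    with conn.cursor() as cur:
        # Acquire an exclusive row lock; other writers block here until
        # this transaction commits or rolls back.
        cur.execute(
            "SELECT balance FROM accounts WHERE id = %s FOR UPDATE",
            (account_id,),
        )
        (balance,) = cur.fetchone()

        # A shared (read) lock would use "FOR SHARE" instead, letting other
        # transactions read the row but not update it.
        cur.execute(
            "UPDATE accounts SET balance = %s WHERE id = %s",
            (balance + delta, account_id),
        )
    conn.commit()  # committing releases the lock and unblocks waiting writers
```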

Pessimistic Locking vs. Optimistic Locking: Best Use Cases

With that out of the way, let’s compare the use cases best suited to optimistic and pessimistic locking.

When Optimistic Locking Works

The assumption with an optimistic system is that collisions will be rare. There will be enough resources to deal with them, and the traffic level won’t cause bottlenecks. Therefore, the system doesn’t need to enforce synchronization to prevent interference.

Since they let more than one user work on a field at a time, optimistic locks help applications scale without performance dips or deadlocks. Beyond scaling, this approach does the trick in applications that can tolerate uncommitted (dirty) reads, since the application can get fresh data with a quick reload.

Overall, optimistic concurrency control shines when:

  • There are few data conflicts (large tables without frequent updates would be ideal)

  • The application is scaling

  • The system can accept dirty reads

  • Connection to the database isn’t maintained all the time (e.g., three-tier architecture)

  • The application does more reads than writes

  • You want to keep locking overhead to a minimum

When It Doesn’t Work

Optimistic locking has its limits, too. For one, rolling back and retrying writes can cost more resources than you’re comfortable with, especially when conflicts happen often. This is why it’s preferable in systems where restarting writes isn’t such a big deal.

In situations where you rely on exact data operations (for instance, most finance-related transactions), optimistic locking could cause problems.
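To illustrate where that cost comes from, here’s a hedged sketch of the retry loop an optimistic system ends up needing, built on the apply_delta_optimistic sketch from earlier; under heavy contention, every failed attempt repeats the full read-and-write cycle.

```python
def apply_delta_with_retry(conn, account_id, delta, max_attempts=5):
    # Each failed attempt repeats the read and the write, which is
    # exactly the extra resource cost described above.
    for _ in range(max_attempts):
        if apply_delta_optimistic(conn, account_id, delta):
            return True  # committed on this attempt
    return False  # repeated collisions: give up and surface an error
```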

When Pessimistic Locking Works

Pessimistic locks lend themselves to environments where data integrity matters more than operation speed. It’s all about getting accurate reads, even if delayed updates are the cost.

But those delays don’t have to be long. Updates in pessimistic systems should be as brief as possible so they don’t hold up the line.

The pessimistic option also comes in handy when conflicts are frequent. It doesn’t roll back write attempts, so transaction costs stay lower. Where many users contend for a small set of frequently updated records (a.k.a. hotspots), pessimistic locks go a long way.

To sum up, pessimistic locks are optimal when:

  • You need a high degree of data integrity

  • There aren’t high demands for scaling

  • Data conflicts happen a lot

  • Updates don’t have to be immediate

  • Your database has plenty of small tables that update often

When It Doesn’t Work

Most of the issues with pessimistic locks arise from the locking itself. For instance, pessimistic locking doesn’t perform well if locks are held for a long time, as that can send performance into a nosedive. It’s also not recommended where deadlocks are a concern, since holding locks is what makes them possible in the first place.

Pessimistic concurrency also does poorly whenever scaling is involved. The constant locking and unlocking hampers fetch speed as the number of users grows. It also fails to meet the flexibility needs of a scaling application, especially if you have limited control over lock ordering.

Pessimistic Lock vs. Optimistic Lock: Is It Either One or the Other?

Developers may feel tempted to pick either optimistic or pessimistic locking and use it across the entire application. Though that may simplify development, the reality is that different tables and objects have different access requirements, as the sketch below illustrates.
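As a purely hypothetical illustration of mixing strategies within one codebase, the two helpers sketched earlier could back different operations depending on how contested their data is.

```python
# Rarely contested data: optimistic locking keeps the common path cheap,
# and the occasional conflict is simply retried.
def add_referral_bonus(conn, account_id, bonus):
    return apply_delta_with_retry(conn, account_id, bonus)

# A hotspot where exact balances matter: serialize writers with a row lock.
def settle_payment(conn, account_id, amount):
    apply_delta_pessimistic(conn, account_id, -amount)
```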

Rather than committing to one side or the other, it’s best to choose a strategy per table or operation, at the application level. The point is that we treat the optimistic-vs-pessimistic-locking question as a binary when we shouldn’t. Both have viable use cases, and neither is obsolete.
