Be aware of possible problems with SQLite in production

Most people still treat SQLite as a toy. Something you slap into a script, a CLI tool, or a mobile app and forget about. I get it - that was my mental model too. Then Rails 8 shipped with SQLite as the default, and I started actually running it in production. Here's what I learned.

WAL mode, or: why the defaults will hurt you

Out of the box, SQLite locks the whole database on every write. In a web app with any concurrency at all, that's a problem. The fix is WAL - Write-Ahead Logging. With WAL enabled, readers and writers don't block each other. Writes go to a separate log file first; the main database file isn't touched until a checkpoint runs. Rails 8 enables WAL by default. Without it, you'd hit timeouts under any real load.
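If you're not on Rails 8, or not in Rails at all, you can flip this yourself. A minimal sketch with the sqlite3 CLI (the db/production.sqlite3 path is the Rails convention; adjust to your setup) - journal_mode is stored in the database file, so setting it once is enough:

```shell
# Switch to write-ahead logging; the setting is persisted in the
# database file itself, so it survives reconnects and restarts.
sqlite3 db/production.sqlite3 "PRAGMA journal_mode=WAL;"

# Verify - prints "wal" once the switch has taken effect.
sqlite3 db/production.sqlite3 "PRAGMA journal_mode;"
```

Pairing this with a busy_timeout is also worth it: writers then wait briefly for a lock instead of failing immediately with SQLITE_BUSY.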

The performance story is also genuinely good for read-heavy apps. There's no network hop, no connection pool to manage, no protocol overhead. The data is on the same machine as your process. For most web applications - where reads outnumber writes by a large margin - this is a real advantage, not just a simplification argument.

The blue-green deployment trap

SQLite's biggest production gotcha isn't a bug in the library. It's how it interacts with modern deployment patterns.

If you're doing blue-green deploys with Docker, you'll briefly have two containers running simultaneously, both mounting the same volume, both trying to read and write the same database files. Under normal circumstances - one deploy a day, low write traffic - you'll never notice. But if you're pushing multiple times an hour and your -wal file is active during the switchover, you can lose data. SQLite's locking works at the filesystem level and breaks down when processes from different containers are competing for the same file at the wrong moment.

I've seen this happen. It's not dramatic - no crash, no error - just a few records that quietly disappear. The lesson: SQLite rewards calm, predictable deployment workflows. If your deploy process is frantic, it'll find a way to punish you.

Forensics with sqlite_sequence

When something looks off with your records, sqlite_sequence is the first place I check. For any table with an AUTOINCREMENT primary key - which is what Rails generates for SQLite - it stores the highest id ever assigned in that table: not the current max, but the all-time high, which survives even after the rows themselves are deleted. If your sequence counter is ahead of your actual max ID, something happened.

# Compare the sequence counter against the actual
# maximum ID in the orders table.
def check_data_integrity
  conn = ActiveRecord::Base.connection

  # sqlite_sequence holds the highest AUTOINCREMENT id ever assigned
  last_assigned_id = conn.select_value(
    "SELECT seq FROM sqlite_sequence WHERE name = 'orders' LIMIT 1"
  ).to_i

  # to_i also turns a NULL (empty table) into 0
  max_actual_id = conn.select_value("SELECT MAX(id) FROM orders").to_i

  if last_assigned_id > max_actual_id
    puts "Warning: potential data loss. Sequence is #{last_assigned_id} but max ID is #{max_actual_id}."
  else
    puts "Integrity check passed. Everything looks solid."
  end
end

This won't tell you what happened, but it confirms something did. From there you can dig into your WAL history or deployment logs.

No ILIKE, and JSON types that will bite you

Coming from Postgres, two things catch people off guard.

First: SQLite has no ILIKE. Its plain LIKE is actually case-insensitive by default - but only for ASCII characters - so for predictable, portable behavior you end up writing LOWER(name) LIKE LOWER(...). It's more verbose and slightly slower, but it works. (Note that SQLite's built-in LOWER also only folds ASCII; full Unicode case folding requires the ICU extension.)
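You can see both behaviors from the sqlite3 CLI - a quick sketch using an in-memory database:

```shell
# Default LIKE already folds ASCII case - this prints 1 (true).
sqlite3 :memory: "SELECT 'Widget' LIKE 'widget';"

# LOWER() makes the intent explicit, which is the portable way
# to express what ILIKE does in Postgres - also prints 1.
sqlite3 :memory: "SELECT LOWER('Widget') = 'widget';"
```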

Second: JSON extraction returns native types. If json_extract pulls out a number, you get an integer, not a string. Comparing that to a string in Ruby - which Postgres would coerce silently - produces nil or false in ways that are annoying to track down.
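You can check the type SQLite hands back with typeof() - a quick demo on an in-memory database:

```shell
# json_extract returns a native SQLite integer, not a string.
sqlite3 :memory: "SELECT typeof(json_extract('{\"category_id\": 10}', '$.category_id'));"
# prints: integer

# Casting to TEXT restores the string comparison Postgres users expect.
sqlite3 :memory: "SELECT typeof(CAST(json_extract('{\"category_id\": 10}', '$.category_id') AS TEXT));"
# prints: text
```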

# Case-insensitive search plus JSON extraction in a world
# without ILIKE and with strict JSON types.
def search_in_products(term)
  # Bind the user input instead of interpolating it into the SQL
  # (the original string interpolation was an injection risk),
  # and cast json_extract to TEXT for a consistent string comparison.
  raw_query = <<-SQL
    SELECT * FROM products
    WHERE LOWER(name) LIKE LOWER(?)
    OR CAST(json_extract(metadata, '$.category_id') AS TEXT) = '10'
  SQL

  Product.find_by_sql([raw_query, "%#{Product.sanitize_sql_like(term)}%"])
end

Neither of these is a dealbreaker. They're just things to know before you spend an afternoon confused.

What you actually gain

A backup is cp db/production.sqlite3 backup.sqlite3 - though while the app is running, do it via sqlite3 db/production.sqlite3 ".backup backup.sqlite3" instead, so pending WAL writes end up in a consistent snapshot. There's no database server to update, no version mismatch to worry about, no connection pooling to tune.
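In practice I'd also verify the copy before trusting it. A sketch of a backup-and-verify step (the /backups path is illustrative): .backup takes a consistent online snapshot, and integrity_check confirms the file is usable:

```shell
# Take a consistent online snapshot, WAL and all.
sqlite3 db/production.sqlite3 ".backup /backups/production.sqlite3"

# Verify the copy before trusting it - prints "ok" when the file is sound.
sqlite3 /backups/production.sqlite3 "PRAGMA integrity_check;"
```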

If the app ever needs to scale horizontally across multiple machines, switching to Postgres takes a few hours. Until that point, SQLite removes an entire category of infrastructure problems from your life. For most projects, that's the right trade.

Summary

I'd used SQLite in tests and development plenty of times, but running it in production was different. It works. It's fast. And the failure modes, when they exist, are specific and avoidable - not mysterious. For smaller projects, I'd reach for it without hesitation now.

Happy SQLite-ing!