Quick Summary

This insight covers how Bacancy built a high-performance multi-tenant SaaS platform on Rails 8 for a US logistics operator, from the tenant isolation decision and Solid Trifecta architecture to carrier-level data safety, deployment strategy, and the production outcomes at launch.

Introduction

The client walked in with a product that worked for one carrier and a contract that required it to work for forty by the end of the quarter. Their existing Rails app was single-tenant. Every new fleet operator they signed was getting a hand-configured instance, a separate database, and a support rotation nobody had the headcount to sustain.

Dispatchers were waiting six seconds for shipment dashboards to load at peak. GPS pings from the driver app were being dropped by the background processor under load. One missed dispatch event had already cost them a mid-size 3PL renewal.

They came to us with a single brief. Turn this into a real multi-tenant SaaS platform on a modern stack, and do it without spinning up Kubernetes for a team of eight engineers.

Our Rails 8 build plan started with one architectural question that reshaped everything else. How do you serve hundreds of fleet tenants from one codebase without ever leaking a shipment between them? The answer ran through Rails 8’s Solid Trifecta, row-level tenancy, and a deployment story that looked nothing like Rails 7. This insight is the real engineering story behind that multi-tenant SaaS platform, with the client anonymized by agreement.

The Logistics SaaS Problem We Were Asked to Solve

The pain was concrete. Their existing app served one carrier. Now they needed to serve many, each with its own drivers, rate cards, routes, and customer portals, all under one platform.

Logistics adds pressure that generic SaaS doesn’t. The US logistics industry is valued at over $1.6 trillion, contributing roughly 8% to national GDP, and the operators moving inside that footprint don’t tolerate sloppy data boundaries. Carrier A’s rate sheets cannot surface to Carrier B. Shipment GPS pings have to flow in continuously without clogging the request queue.

Dashboards need to stay under a second even when a carrier uploads ten thousand shipments in a batch. On top of that, the client had a SOC 2 Type II audit on the roadmap.

That meant tenant isolation had to be defensible on paper, not just working in practice. Every choice in the multi-tenant SaaS platform architecture that followed was shaped by that context, and every outcome we report at the end traces back to a decision made in this phase.

The First Call in Any Multi-Tenant SaaS Platform Build: Picking the Tenant Isolation Model

Three models were on the table. Shared schema with a tenant_id column on every row. Schema-per-tenant using PostgreSQL schemas through a gem like ros-apartment. Database-per-tenant, one physical database per carrier.

Each has real trade-offs, and picking the wrong one early is a six-month rebuild most teams can’t afford. The right one for this build wasn’t obvious.

Why We Ruled Out Database-Per-Tenant

Database-per-tenant gives the strongest isolation and is common in heavily regulated verticals. But it scales linearly in cost and operational overhead. At 300-plus carriers, the maintenance burden would have eaten our timeline. Backups, migrations, and monitoring all multiply.

Why Schema-Per-Tenant Lost the Argument

Schema-per-tenant was tempting because it reads clean and each tenant feels contained. The problem showed up in cross-tenant reporting, which the client’s admin team needed daily. Once schemas multiply, cross-tenant queries get painful. Migrations turn into a parallel-execution puzzle.

What We Picked and Why

We went with row-level tenancy, a tenant_id column scoped through CurrentAttributes. For this multi-tenant SaaS platform, the reasoning was practical. It scales to thousands of carriers on a single PostgreSQL cluster. It keeps migrations simple. It lets the admin dashboard run cross-tenant queries without juggling connections.

The trade-off is that a missed scope means a data leak. We paired it with automated linter rules that block Model.unscoped without an explicit reviewer annotation. Subdomain routing handled tenant resolution at the edge. acme.client.com mapped to the Acme carrier through a before-action in ApplicationController. Clean, fast, and the pattern most logistics buyers already recognize from other SaaS tools they use.
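The resolution step can be sketched in plain Ruby, with the Rails wiring shown in comments. `TenantResolver` and the base domain are illustrative stand-ins, not names from the actual codebase:

```ruby
# Hypothetical sketch of subdomain-based tenant resolution.
module TenantResolver
  BASE_DOMAIN = "client.com"

  # "acme.client.com" -> "acme"; the bare domain or a nested host -> nil
  def self.subdomain(host)
    return nil unless host.to_s.end_with?(".#{BASE_DOMAIN}")
    sub = host.delete_suffix(".#{BASE_DOMAIN}")
    sub.include?(".") ? nil : sub
  end
end

# In Rails, this backs the tenant-resolving before-action:
#
#   class ApplicationController < ActionController::Base
#     before_action :set_current_tenant
#
#     private
#
#     def set_current_tenant
#       slug = TenantResolver.subdomain(request.host)
#       Current.tenant = Tenant.find_by!(slug: slug)
#     end
#   end
```

Rejecting nested subdomains up front keeps a crafted host header from resolving to an unintended tenant.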

That decision alone removed weeks from the multi-tenant SaaS platform timeline.

Planning a similar multi-tenant build and not sure which isolation model fits your scale?

Hire Ruby on Rails Developers from Bacancy to build your multi-tenant SaaS platform with the right architecture from day one.

How the Rails 8 Solid Trifecta Reshaped Our Logistics Infrastructure

This is where Rails 8 specifically earned its place on the multi-tenant SaaS platform build. The piece that mattered most was the Solid Trifecta. Solid Queue, Solid Cache, and Solid Cable let us consolidate the whole platform on a single PostgreSQL 16 cluster. No Redis, no Sidekiq, no external pub/sub.

For a logistics workload that ingests GPS data continuously, that consolidation wasn’t cosmetic. It removed three moving parts from the production stack.

Solid Queue for GPS Ingest and Route Optimization Jobs

The client’s fleet was pushing telemetry at roughly six pings per minute per active shipment, which aggregated to thousands of events per second across the full fleet during peak hours. Route optimization jobs, shipment status recalculations, and carrier invoice generation all needed a background processor that wouldn’t buckle.

Solid Queue uses FOR UPDATE SKIP LOCKED on PostgreSQL 9.5+, which avoids the lock contention older Sidekiq deployments sometimes see at high dispatch rates. We ran the Solid Queue supervisor as a dedicated bin/jobs process under Kamal rather than embedding it in Puma, because GPS ingest at that volume competes with web request resources if you share them. The Puma plugin is a clean option for lighter workloads, but for this one the separate process was the right call.

Concurrency controls mattered for logistics specifically. Solid Queue lets you scope a concurrency key by job arguments, so we keyed route-optimization jobs by tenant ID. One carrier’s bulk import couldn’t monopolize the worker pool and starve another tenant’s real-time dispatches.
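In rough outline, the per-tenant keying looks like this. The stand-in `ApplicationJob` base class below only exists so the sketch runs outside Rails; in the real app, `limits_concurrency` comes from Solid Queue's ActiveJob extension, and the job body is illustrative:

```ruby
# Stand-in base class so the sketch runs outside Rails.
class ApplicationJob
  def self.queue_as(name); end

  def self.limits_concurrency(to:, key:)
    @concurrency_key = key
  end

  def self.concurrency_key
    @concurrency_key
  end
end

class RouteOptimizationJob < ApplicationJob
  queue_as :route_optimization

  # At most one optimization job per tenant runs at a time, so one carrier's
  # bulk import cannot monopolize the worker pool.
  limits_concurrency to: 1, key: ->(tenant_id, *_rest) { tenant_id }

  def perform(tenant_id, shipment_ids)
    # Restore tenant context, then optimize routes for shipment_ids.
  end
end
```

Because the key is derived from job arguments, two tenants' jobs never contend for the same concurrency slot.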

Solid Cache for Carrier Rate Lookups and Dashboard Responses

Rate lookups are the hot path in logistics software. Every shipment quote hits dozens of carrier rate rules, fuel surcharge tables, and accessorial fees.

We cached the compiled rate engine output in Solid Cache with a tenant-scoped key pattern and a 15-minute TTL. Dashboard fragment caching handled the second hot path, the ops view that shows live shipment counts, delayed loads, and carrier scorecards.
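The key pattern itself is simple string construction; a minimal sketch, with `RateCache`, the version segment, and `RateEngine` as hypothetical names:

```ruby
# Tenant-scoped cache key layout (names illustrative).
module RateCache
  TTL_SECONDS = 15 * 60

  # e.g. "rates/v3/tenant:42/lane:ORD-DFW"
  def self.key(tenant_id, lane, version: "v3")
    "rates/#{version}/tenant:#{tenant_id}/lane:#{lane}"
  end
end

# Usage against the Rails cache (Solid Cache is the Rails 8 default store):
#
#   Rails.cache.fetch(RateCache.key(Current.tenant.id, lane),
#                     expires_in: 15.minutes) do
#     RateEngine.compile(Current.tenant, lane)
#   end
```

Embedding the tenant ID in the key means a cache hit can never serve one carrier's compiled rates to another, and bumping the version segment invalidates everything at once after a rate-engine change.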

Solid Cache writes to SSD-backed database storage rather than RAM. Per read, Redis is faster. But the reason Rails 8 defaults to Solid Cache is that disk lets you keep a much larger cache for the same cost, which lifts the hit rate and dominates the economics on read-heavy workloads. For our rate-lookup traffic, a higher hit rate against a larger cache outperformed a smaller Redis cache with more misses.

What we cared about for this multi-tenant SaaS platform was keeping one cache, one backup story, one compliance scope. That was a real win in production, not a theoretical one.

Solid Cable for Real-Time Shipment Tracking

Live shipment tracking is table stakes for logistics buyers in 2026. Drivers’ mobile apps push location updates, dispatch screens redraw in near-real-time, and shippers expect the same visibility through their customer portal.

Solid Cable handled the WebSocket layer through PostgreSQL with a default 100ms polling interval. We wired tenant-aware connections into Action Cable using reject_unauthorized_connection so a dispatch operator at Carrier A physically cannot subscribe to Carrier B’s shipment stream, even with a crafted request.

The rejection happens in ApplicationCable::Connection#connect before any channel logic runs. That defensive posture was one of the items our client specifically wanted to point to during the SOC 2 readiness review.

Across the three, we eliminated Redis as a dependency. One PostgreSQL cluster, one backup story, one set of credentials to rotate.

Keeping Tenant Data Isolated Across Requests and WebSocket Connections

Row-level tenancy is only as strong as the scoping layer that enforces it. Our multi-tenant SaaS platform implementation used ActiveSupport::CurrentAttributes to hold the current tenant for every request.

A before-action in ApplicationController resolved the tenant from the subdomain, set Current.tenant, and every Active Record default scope filtered by Current.tenant.id. Thread-isolated by design, reset automatically before and after each request.
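A plain-Ruby stand-in demonstrates the thread-isolation property; the real app gets the same behavior, plus the automatic per-request reset, from subclassing ActiveSupport::CurrentAttributes:

```ruby
# Thread-local stand-in for ActiveSupport::CurrentAttributes.
class Current
  def self.tenant=(tenant)
    Thread.current[:current_tenant] = tenant
  end

  def self.tenant
    Thread.current[:current_tenant]
  end

  def self.reset
    Thread.current[:current_tenant] = nil
  end
end

# Model-side scoping (sketch): every tenant-owned model filters by the
# current tenant through a default scope.
#
#   class Shipment < ApplicationRecord
#     default_scope { where(tenant_id: Current.tenant.id) }
#   end
```

Because the storage is per thread, two requests served concurrently by different Puma threads can hold different tenants without ever seeing each other's value.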

The harder problem was WebSocket connections. Action Cable lives outside the normal request cycle, so tenant context has to be established at connection time.

We overrode ApplicationCable::Connection#connect to resolve the tenant from the WebSocket request’s subdomain. If the tenant didn’t exist, or if the authenticated user didn’t belong to it, we called reject_unauthorized_connection immediately. Inside each channel, we passed the tenant ID into the stream name, so broadcasts from one tenant could never cross-pollute another.
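The connect-time check and the stream naming can be sketched as plain Ruby. `RejectedConnection` and the helper names are stand-ins for what `ApplicationCable::Connection#connect` and `reject_unauthorized_connection` do in the real app:

```ruby
class RejectedConnection < StandardError; end

module TenantCable
  # Mirrors the connect-time check: the connection is rejected unless the
  # tenant resolved from the subdomain exists and the authenticated user
  # belongs to it.
  def self.authorize!(tenant, user)
    unless tenant && user && user[:tenant_id] == tenant[:id]
      raise RejectedConnection
    end
    tenant
  end

  # Tenant ID baked into the stream name, so a broadcast to one tenant's
  # stream can never reach another tenant's subscribers.
  def self.stream_name(tenant_id)
    "shipments:tenant:#{tenant_id}"
  end
end
```

In the real channel, `stream_from TenantCable.stream_name(current_tenant.id)` (or its equivalent) is what makes cross-tenant delivery structurally impossible rather than merely filtered.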

Background jobs posed the third risk in this multi-tenant SaaS platform. Solid Queue jobs run outside any request, so we serialized the tenant ID into job arguments and restored Current.tenant inside the job’s perform method before any work touched the database.
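Assuming the convention that every job's first argument is the tenant ID, the restore-then-clear pattern looks roughly like this. `ApplicationJob#run` and the `Current` class here are stand-ins for the hook ActiveJob provides around `perform` and for the CurrentAttributes holder:

```ruby
# Thread-local stand-in for the request-scoped tenant holder.
class Current
  class << self
    attr_accessor :tenant
  end
end

class ApplicationJob
  # Restore tenant context before perform runs and clear it afterwards, so
  # no query inside the job can execute unscoped.
  def run(*args)
    Current.tenant = args.first          # real app: Tenant.find(args.first)
    perform(*args)
  ensure
    Current.tenant = nil
  end
end

class InvoiceGenerationJob < ApplicationJob
  def perform(tenant_id, period)
    "invoice for tenant #{Current.tenant} / #{period}"
  end
end
```

The `ensure` clause matters: even if `perform` raises, the worker thread never carries a stale tenant into the next job.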

Linter rules flagged any job class that didn’t declare a tenant argument. The combined result: a request path, a WebSocket path, and a job path, all three with tenant context provably set before any query ran.

Mapping the Architecture to SOC 2 Type II Controls

The client mentioned a SOC 2 Type II audit on the roadmap during the kickoff. That shaped which architectural choices we treated as defensible-by-design rather than “we’ll harden it later.” Three decisions in the multi-tenant SaaS platform mapped directly to SOC 2 controls:

  • Tenant scoping enforced at the framework level. Every request, WebSocket connection, and background job sets Current.tenant before any database query runs. The linter rules that block Model.unscoped without a reviewer annotation produce an audit trail showing tenant scoping is not a developer’s good intention but a code-enforced rule.
  • Action Cable rejection at connection time. reject_unauthorized_connection runs in ApplicationCable::Connection#connect before any channel logic. A dispatch operator at Carrier A cannot subscribe to Carrier B’s shipment stream even with a crafted request. That gives the auditor a single line of code to point to.
  • Per-tenant observability. Every Prometheus metric carries a tenant_id label. When the audit team asks who can see what, the answer is in the dashboards, not in a Word doc.

None of this guarantees a passing audit on its own. That work belongs to the client’s compliance team. But it gives them defensible, code-level evidence to point to, which shortens the audit cycle.

Deploying at Scale with Kamal 2 and Per-Tenant Observability

Rails 8 ships with Kamal 2 as the default deployment tool, and for this build it replaced what would have been a Kubernetes cluster plus a Helm chart for a team this size. Kamal handled Docker builds, TLS through Let’s Encrypt, rolling deploys, and zero-downtime rollbacks.

We ran canary releases by tagging a small group of internal test carriers and routing them to the new container first. Error rates and p99 latency got ten minutes of review before we rolled the change forward.

Why Aggregate Metrics Lie in Multi-Tenant Systems

Observability was the part most multi-tenant SaaS platform guides skip. Aggregate metrics lie in multi-tenant systems. If Carrier A is seeing 3-second dashboard loads but the platform-wide p95 sits at 400ms, the aggregate hides the problem. One noisy tenant can drag down the entire user experience while your dashboards say everything is fine.

How Tenant-Tagged Telemetry Changed the Ops Workflow

We instrumented every custom metric with a tenant_id label in Prometheus. Grafana dashboards could then surface a single noisy tenant without us waiting for a support ticket. For the client’s customer success team, this turned into a proactive tool. They could spot a carrier whose shipment volume was growing past their tier before performance complaints arrived.
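A minimal illustration of why the tenant_id label matters, modelled loosely on labelled Prometheus metrics (all names here are hypothetical): the same metric, sliced per tenant, exposes the slow carrier that the aggregate hides:

```ruby
# Toy labelled histogram; real deployments would use prometheus-client.
class LabeledHistogram
  def initialize
    @observations = Hash.new { |h, k| h[k] = [] }
  end

  def observe(value, labels)
    @observations[labels] << value
  end

  # Nearest-rank quantile over observations matching the given labels.
  def quantile(q, labels)
    sorted = @observations[labels].sort
    return nil if sorted.empty?
    sorted[((sorted.length - 1) * q).round]
  end
end

DASHBOARD_LATENCY = LabeledHistogram.new

# One slow tenant is invisible in a platform-wide average but obvious
# the moment the metric is sliced per tenant:
100.times { DASHBOARD_LATENCY.observe(0.2, tenant_id: "acme") }
100.times { DASHBOARD_LATENCY.observe(3.0, tenant_id: "globex") }
```

With the label in place, a Grafana panel grouped by tenant_id surfaces "globex" immediately instead of averaging it away.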

The deployment frequency changed as much as anything else for this multi-tenant SaaS platform. Canary rollouts, a single database server to back up, and Kamal’s idempotent command model meant we moved from weekly deploys at the start of the build to daily deploys by launch. Small, reversible changes became the norm.

Conclusion

Rails 8 reset the math for any multi-tenant SaaS platform serving logistics. Solid Trifecta removed an entire layer of infrastructure. Row-level tenancy with CurrentAttributes gave the client a defensible isolation story without operational bloat. Kamal 2 turned deployment into a solved problem instead of a standing agenda item.

None of these pieces existed as defaults before Rails 8. Together they changed what a small engineering team could credibly ship. Our multi-tenant SaaS platform outcomes for this build came out as follows:

  • Eliminated Redis across caching, queues, and WebSockets, consolidating the data layer onto a single PostgreSQL cluster
  • Cut p95 shipment dashboard latency from roughly 1.8 seconds to under 400 milliseconds under peak load
  • Reduced new carrier tenant onboarding from a three-day provisioning process to same-day through subdomain automation
  • Moved the team from weekly to daily deploys using Kamal 2 canary rollouts, with rollback under two minutes

With Rails 8.1 now shipping and Rails 8.2 previews already landing features like type-safe JSON attributes and CombinedConfiguration for tenant overrides, the platform is only going to get sharper for this use case.

For logistics operators sizing up a similar multi-tenant SaaS platform build, the right starting point is a structured architecture engagement with a team that has shipped Rails 8 work at production scale. Bacancy’s Ruby on Rails Development Company services are structured to begin exactly there. Share your tenant count, your expected shipment volume per day, and your compliance roadmap, and a good team should be able to draft a realistic architecture response inside a week.

Frequently Asked Questions (FAQs)

Can the Solid Trifecta fully replace Redis?

For most read-heavy multi-tenant SaaS platform workloads, yes. Redis is still better for sub-millisecond reads at very high QPS or thousands of concurrent WebSockets. For everything else, Solid Trifecta on PostgreSQL works.

Which tenant isolation model should you choose?

It depends on tenant count, compliance posture, and reporting needs. Row-level tenancy with a tenant_id column scales to thousands of tenants on one cluster and supports cross-tenant queries cleanly, which is why we chose it for this build. Schema-per-tenant works for tens to low hundreds of tenants but breaks down on cross-tenant reporting. Database-per-tenant gives the strongest isolation and is the right call for heavily regulated verticals where one tenant’s data physically cannot share infrastructure with another’s.

When is Kamal 2 enough, and when do you need Kubernetes?

Kamal 2 for teams of 5 to 15 engineers up to a few thousand tenants. Kubernetes when you need a service mesh, custom operators, or cross-cloud portability.

How do you keep Action Cable streams isolated between tenants?

Resolve the tenant in ApplicationCable::Connection#connect and call reject_unauthorized_connection if the user does not belong to it. Pass the tenant ID into the channel stream name so broadcasts cannot cross tenants.
