Add Multi-Tenant Feature #11671
Replies: 5 comments 9 replies
-
You can have multiple databases on one database server by mapping each site name to a database name. This way no unnecessary schema-level changes need to be made, and the data is also protected; otherwise it could be exposed by a wrong query or missing permissions.
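A minimal sketch of the site-name-to-database mapping described above. The helper names (`dbNameForSite`, `connectionOptionsFor`) and the `medusa_` prefix are illustrative assumptions, not Medusa APIs:

```typescript
// Derive a per-tenant database name from the site name, so each tenant
// gets its own database on a shared Postgres server.
const DB_PREFIX = "medusa_";

// Normalize a site name into a safe Postgres database name.
function dbNameForSite(siteName: string): string {
  const slug = siteName.toLowerCase().replace(/[^a-z0-9_]/g, "_");
  return `${DB_PREFIX}${slug}`;
}

// Build connection options for a tenant: one database per tenant,
// one shared server, no schema changes needed.
function connectionOptionsFor(siteName: string) {
  return {
    host: process.env.DB_HOST ?? "localhost",
    port: 5432,
    database: dbNameForSite(siteName),
  };
}
```

With this approach a request for `acme-store` would connect to `medusa_acme_store`, and a misdirected query can only ever see that tenant's database.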
-
I tried to do this a short while ago. There doesn't appear to be any way to accomplish it with existing features. My workaround so far has been to implement each of my tenants as a sales channel and apply middleware that restricts or applies sales_channel_id filtering on many features. This has been pretty successful thus far.
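The middleware workaround above could look roughly like this. This is a generic Express-style sketch, not Medusa's actual middleware API; the `TENANT_CHANNELS` map, header-based tenant resolution, and `filterableFields` field are all assumptions for illustration:

```typescript
// Minimal request/response shapes so the sketch is self-contained.
type Req = {
  headers: Record<string, string | undefined>;
  filterableFields?: Record<string, unknown>;
};
type Res = { status: (code: number) => { json: (body: unknown) => void } };
type Next = () => void;

// Hypothetical mapping from tenant hostname to its sales channel id.
const TENANT_CHANNELS: Record<string, string> = {
  "shop-a.example.com": "sc_tenant_a",
  "shop-b.example.com": "sc_tenant_b",
};

// Middleware: pin every request to one sales channel so downstream
// list/detail queries are always filtered by sales_channel_id.
function tenantScope(req: Req, res: Res, next: Next): void {
  const channelId = TENANT_CHANNELS[req.headers["host"] ?? ""];
  if (!channelId) {
    res.status(403).json({ message: "Unknown tenant" });
    return;
  }
  // Force the filter; a client-supplied sales_channel_id is overwritten.
  req.filterableFields = { ...req.filterableFields, sales_channel_id: channelId };
  next();
}
```

The key design point is that the filter is injected server-side from the hostname, so a client cannot opt out of it by omitting or changing a query parameter.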
-
You can create a store and use a module link to link it with a partner (or whatever namespace you prefer), then keep using that same reference everywhere. You might need to develop an API route, say partner, which handles all of the required isolation. It's doable and not complex, but it's a design choice. A store is associated with a location and currency, which don't need isolation as such; products are connected to channels, which are connected to stores, and each publishable key is associated with a store, which creates the isolation at the store level.
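The key-per-store isolation described above can be sketched as a simple lookup: every publishable API key resolves to exactly one store, and all subsequent queries are scoped through that resolution. The names (`KEY_TO_STORE`, `resolveStore`) are illustrative, not Medusa internals:

```typescript
// Hypothetical mapping: one publishable key per store.
const KEY_TO_STORE: Record<string, { storeId: string; currency: string }> = {
  pk_aaa: { storeId: "store_partner_a", currency: "usd" },
  pk_bbb: { storeId: "store_partner_b", currency: "eur" },
};

// Resolve the store for a request; every downstream query is then
// scoped to store.storeId, which is what creates store-level isolation.
function resolveStore(publishableKey: string): { storeId: string; currency: string } {
  const store = KEY_TO_STORE[publishableKey];
  if (!store) throw new Error("Unknown publishable key");
  return store;
}
```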
-
Hey @BjornTheProgrammer, hope this helps a bit with the topic. We (Rigby) have had many conversations about multi-tenancy and multi-store setups in Medusa and decided to do a deep dive as an eBook, Multi-Tenant Architecture in Medusa, covering the different technical approaches and trade-offs. You can also read a summary on the official Medusa blog: Building a multi-tenant commerce platform with Medusa, written by our CTO.
-
Hey everyone, been going pretty deep into this topic recently. I went through the Rigby approach, spent a lot of time reading the Medusa internals, and honestly the more I dig, the more I see this is way harder than just adding a tenant_id to every table. Here's what I found:

The data layer part is actually not that hard: adding a column, fixing unique indexes (for example, the customer email constraint is a composite on (email, has_account) and it's global, so two tenants can't even have the same customer email), setting up RLS or a MikroORM filter. That part is fine. The problem is that Medusa has a few places where raw Knex queries go straight to the database and skip the ORM completely: the pricing module's calculatePrices(), all three inventory level methods (getReservedQuantity, getAvailableQuantity, getStockedQuantity), the RBAC recursive CTE, and the base repository's hard delete(). These don't go through MikroORM at all, so a tenant filter added at the ORM level simply won't apply to them. RLS fixes this because it works at the PostgreSQL level and catches everything; if you try to do it in the app, you'd have to go fix each one by hand.

Subscribers and workflows worried me at first, but after looking into it I think they're actually fine. Events carry the entity ID, which is a unique ULID, and everything after that just follows that ID. For example, order.placed fires once for one specific order; the subscriber picks it up and processes just that order. It doesn't run for every tenant or anything like that. One event, one run, one email to the right customer. Even when you follow foreign keys (order → customer → address), it's all ULIDs pointing to the correct rows. So no issue there.

Where it actually gets problematic is more specific:

- Scheduled jobs: these don't have an event with an ID. A job like "find abandoned carts older than 24h and send reminder emails" just scans the whole table. There's no tenant context, so it goes through every tenant's carts. And then which sender do you even use for the email?
- Writing new records from background work: if some workflow step creates a fulfillment or notification record, what tenant_id goes on it? Reading by ULID works fine, but when you write something new there's no way to know which tenant it belongs to.
- Provider configs: I think this one is kind of a big deal, but I don't see anyone talking about it. Every provider (SendGrid, Stripe, fulfillment, file storage) is a singleton created once at app startup from medusa-config.ts. Notifications are resolved by channel (one provider per channel, and Medusa actually throws an error if two providers try to use the same channel). So if tenant A and tenant B both want SendGrid but with different API keys and different sender addresses, you just can't do that. The SendGrid provider calls sendgrid.setApiKey(), which sets the key globally at the npm module level. Two configs can't even exist at the same time in one process.

The Rigby RLS approach does a good job with data isolation: because it works at the PostgreSQL level, it catches all those raw Knex queries too. The scheduled job and write-side problems are real but not huge. The provider config issue, though, is a whole different problem, and RLS can't help there at all.

At this point I honestly think, for most cases, just running a separate Medusa instance per tenant behind a gateway is the better move. You get everything isolated (data, configs, providers) without having to patch the framework or worry about things breaking on upgrades. Yes, managing multiple deployments is more work, but it's nothing new; people do it all the time.

Would be cool to hear from the Medusa team if there are any plans for native support. Even small things would help a lot, like making provider resolution tenant-aware or adding tenant context to scheduled job configs.
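One conceivable way around the singleton-provider problem described above is a per-tenant registry that lazily creates one client per tenant instead of one process-wide instance, so no tenant's API key ever lands in module-global state. This is a sketch under assumptions: the `NotificationClient` interface, `TenantConfig` shape, and `TenantProviderRegistry` class are hypothetical, not part of Medusa:

```typescript
// What a per-tenant notification client needs to do.
interface NotificationClient {
  send(to: string, body: string): Promise<void>;
}

type TenantConfig = { apiKey: string; from: string };

class TenantProviderRegistry {
  // Cache: one client per tenant, created on first use.
  private clients = new Map<string, NotificationClient>();

  constructor(private configs: Record<string, TenantConfig>) {}

  resolve(tenantId: string): NotificationClient {
    const cached = this.clients.get(tenantId);
    if (cached) return cached;
    const cfg = this.configs[tenantId];
    if (!cfg) throw new Error(`No notification config for tenant ${tenantId}`);
    // In a real provider this would wrap an SDK instance that holds
    // cfg.apiKey on the instance, so two tenants' keys never collide
    // the way a module-global setApiKey() call does.
    const client: NotificationClient = {
      async send(to, body) {
        console.log(`[${cfg.from}] -> ${to}: ${body}`);
      },
    };
    this.clients.set(tenantId, client);
    return client;
  }
}
```

This only helps if the framework's provider resolution can be made to go through such a registry, which is exactly the kind of hook that doesn't exist today.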
-
Thank you for your amazing framework; it is incredibly useful!
I would like to instantiate multiple Medusa stores for separate tenants. This way I would not need to run a separate database and a separate MedusaJS instance for each tenant's store; such a setup would be much harder to maintain without Medusa handling it natively, and it would also consume much more compute resources.
To do so, we would really only need a store_id or tenant_id on every table. I have a client who would be interested in maintaining this feature in the future as well; otherwise, they will likely maintain a fork of Medusa.
Please let me know if you would be interested in such a feature set, and what requirements you might have for it!
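The tenant_id-on-every-table idea above implies that every query must be scoped, and one way to make that hard to forget is a helper that injects the condition centrally. A minimal sketch; `scopedWhere` is a hypothetical name, not part of Medusa or MikroORM:

```typescript
type Where = Record<string, unknown>;

// Force a tenant_id condition into every query filter, and refuse
// any filter that tries to reach across the tenant boundary.
function scopedWhere(tenantId: string, where: Where = {}): Where {
  if ("tenant_id" in where && where["tenant_id"] !== tenantId) {
    throw new Error("Query attempted to cross tenant boundary");
  }
  return { ...where, tenant_id: tenantId };
}
```

As noted in the discussion above, an app-level guard like this only covers queries that actually go through it; raw SQL paths would still need row-level security or per-call fixes.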