Sidestepping service object design mistakes
Rails is famous for getting projects up and running quickly. If you're willing to play along with the "Rails Way", you'll get sensible defaults for everything from project file structure to HTTP request routing. Rails has established conventions for the three big pieces: models, views, and controllers (MVC). Many projects don't need anything beyond that. With a little practice, you can figure out where the framework leaves slots for you to drop in your project's code.
The above works well for simple projects. However, as projects grow, the default MVC buckets aren't always a natural fit for every requirement. For example, how do you test an async job processing tool that's outside of the pure MVC purview? If you have tricky duplicate behavior, where should it go? If a request to your web application triggers code for dynamic price calculation based on first-time purchase behavior, a number of factors come into play. Do you really want to test every possible path with an end-to-end simulation for factors like validating a promo code (which may or may not be valid), processing payment (which could fail), and completing an event ticket purchase (which has to allocate a specific seat from a pool from which other people are also purchasing in real time)?
At scale, stuff starts to get messy. The Rails framework gives you rules about how things work and where they go. Stepping out of the framework means you need to make decisions. A deep discussion of these decision criteria is outside the scope of a single blog post, but we can highlight some popular best practices. One of the most popular is the Service Object pattern.
Service objects give you a place to develop and test business logic outside of the Rails framework. The tradeoff is that you need to decide on your own patterns and guardrails. Service object patterns offer the benefit of broad flexibility. You can trigger and coordinate other behaviors without trying to shoehorn those behaviors into premature abstractions. This flexibility can also be a downside: the loose structure means it’s easy for every developer on your team to establish their own patterns and conventions. That might lead to duplication, conflict, and irregular decisions that are hard to revert because they’re scattered throughout the codebase. So while service objects are a useful tool, you'll need to recognize the double-edged sword inherent within their flexibility. Some upfront due diligence about your team's patterns and practices will help you dodge rework and be smart about service objects for the long-term.
High-Level Uses - Rails Controllers
It's no accident that most of the service object libraries in the wild (including Interactor, ActiveInteraction, SimpleCommand, and LightService) start with the use case of organizing business logic inside a Rails controller. The author of one library, Micro::Case, has an entire repository just comparing service object implementations in controllers! Rails controllers are good at their primary job - mapping inbound URL requests to the desired business logic trigger. But it can be tricky to test the application parts independently of the HTTP parts.
Rails adds some structural conventions for pre-organizing work. Its main job, once that structure is set up, is to establish a place to put your specific behaviors for inbound HTTP requests. Since actions are just methods on a controller, you don't need any additional objects at this point. Simply match up the request details to the right controller method and drop in your code. Now you have trigger-friendly business logic, with a lot of automatic tooling (hooks, authentication, request parameter preprocessing, etc.) provided for you by the framework.
Why not just use controllers? Why did "fat model, skinny controller" become a best practice once Rails projects started existing long enough to require maintenance? Testing business-logic behavior in a controller can be tricky. You can only test a controller by simulating an HTTP request to an endpoint, and then verifying the HTTP response, as well as examining any intended side effects. For a framework tool that is designed to handle web traffic, that makes intuitive sense. On the other hand, keeping your business logic in controllers makes it hard to exercise only your business logic, separate from the overhead of HTTP traffic details.
In addition to tangling up framework and business logic in testing, controllers also encourage you to weave HTTP logic and business logic together in your code. Conditional logic and early-exiting in controllers are just finicky enough that it's difficult to extract those behaviors once they've been written into a controller action.
It's easy to see this tangle in a typical update action. Squint a little and mentally highlight the lines that are ONLY related to Rails controller/view context - params handling, renders, redirects - and you'll find the framework plumbing crowds out the business logic.
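Here's a hypothetical update action in that spirit. The `NotificationClient`, `Order`, and controller internals are illustrative stand-ins, and `render`/`redirect_to` are stubbed so the snippet is self-contained outside Rails:

```ruby
# Stand-in for an external notification service (illustrative only).
class NotificationClient
  def self.notify(message)
    message # a real client would make an API call here
  end
end

# Stand-in for an ActiveRecord model.
Order = Struct.new(:id, :status) do
  def update(attrs)
    return false unless attrs[:status]
    self.status = attrs[:status]
    true
  end
end

class OrdersController
  attr_reader :params, :response

  def initialize(params)
    @params = params
  end

  # Stand-ins for ActionController's render/redirect_to.
  def render(options)
    @response = options
  end

  def redirect_to(path)
    @response = { redirect: path }
  end

  def update
    order = Order.new(params[:id], "pending")

    # HTTP concern (parameter checking) tangled into the action body.
    unless params[:status]
      render(status: 422, json: { error: "status is required" })
      return # forgetting this early return is an easy mistake
    end

    if order.update(status: params[:status])
      # Business side effect buried among rendering decisions.
      NotificationClient.notify("order #{order.id} is now #{order.status}")
      redirect_to("/orders/#{order.id}")
    else
      render(status: 422, json: { error: "update failed" })
    end
  end
end
```

Only a handful of these lines are actual business behavior; the rest is HTTP plumbing that your tests have to wade through.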
It's easy to braid Rails into your business logic, but the results can be challenging to work with in the future -- how confident are you that you could extract part of this update action into a separate method and everything would still work? Are you sure that NotificationClient class won't throw an error? Did you remember to return in all the right places?
It might help your mindset to consider things the following way: when it comes to meaningful business logic, Rails controller actions are just handlers. The framework has strong opinions about conventions for mapping HTTP requests and rendered pages to specific handlers by name, but there's nothing magic about how business logic is organized inside a single controller. That logic is the heart and soul of your application.
The content of these handlers is behavior your application provides; you should be in charge. This is where service objects can excel -- you can develop and refine the specific behavior you want in raw Ruby, test it by making method calls instead of HTTP requests, and clarify what success and failure mean for each operation. Once it's written, you can drop it into a controller action, with your success and failure cases available to map to appropriate HTTP response codes.
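A minimal sketch of that shape, in plain Ruby (all names here are illustrative, not from any particular library): the service knows nothing about HTTP, and the controller's only job is mapping the result to a response.

```ruby
# A simple result object: did the operation succeed, and with what?
Result = Struct.new(:ok, :value, :error, keyword_init: true) do
  def success?
    ok
  end
end

# Hypothetical service object for the ticket-purchase example.
class CompleteTicketPurchase
  def self.call(promo_code:, seat_pool:)
    # Business rules expressed as plain Ruby, testable via method calls.
    unless promo_code.start_with?("PROMO-")
      return Result.new(ok: false, error: "invalid promo code")
    end

    seat = seat_pool.shift # allocate a seat from the shared pool
    return Result.new(ok: false, error: "sold out") if seat.nil?

    Result.new(ok: true, value: { seat: seat })
  end
end

# In a controller action, the result maps cleanly onto HTTP:
#
#   result = CompleteTicketPurchase.call(promo_code: params[:promo],
#                                        seat_pool: available_seats)
#   if result.success?
#     render json: result.value, status: :created
#   else
#     render json: { error: result.error }, status: :unprocessable_entity
#   end
```

Every branch of the business logic can now be exercised with a one-line method call instead of a simulated request.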
Sidestepping Design Mistakes
In recognizing that your application behavior doesn’t have to live inside the boundaries of a framework, you’re ready to start working service objects into your codebase. At this moment, your team should consider the details of what happens around the boundaries of your service objects — inputs, outputs, and errors. If there is no coordinated effort to make these details consistent, pieces functioning well in isolation could cause tricky design problems when they interact down the road.
Here are a few of the major considerations you should look at:
Inconsistent implementation
There's no perfect definition of a service object. There is also considerable overlap with terms like "operation", "use case", and "command". There is no established convention for precisely how a service object behaves. Does it handle failures by raising custom errors? Is it only functional, with no side effects? Is it allowed to work with framework classes for things like loading from a database? If your team agrees on the idea of a service object, but not on the upfront details of some basic implementation, you've added unpredictability with no real benefit.
The most pragmatic outlook here is: "the best solution is the one that is consistent and predictable." There are usually some basic high-level requirements:
- expresses a single user action
- ...by coordinating and organizing the actions of other objects
- ...and expressing both success and anticipated failure cases
- ...behind a defined interface
If you can capture all the requirements for your service object pattern with a custom, in-house tool – great! Build and use that.
On the other hand, maybe you don't need to reinvent the wheel. One advantage of choosing an existing service-object library is sidestepping the initial rounds of bikeshedding while the team figures out the API/implementation details. There are several Rails-friendly service object libraries, and most of them are pretty basic. That's a feature, not a bug. All the heavy lifting should be in your code, and a service object should be a thin coordinating wrapper. At OpsLevel, we started our service object adventure with the Interactor gem.
An Interactor is a straightforward tool. Every high-level user action is represented as a standalone class with a single, class-level call method. That method takes a flexible set of key-value arguments, coordinates the operation of other classes, and returns a result object. The result will report whether the operation was a success or failure, and hold onto any content you attach to it as part of the operation (e.g., created records, content from API calls, or even diagnostic failure messages).
This process covers the requirements presented above. It also allows us to test these behaviors outside of Rails, and handle failures in a predictable way. It looks like a pretty loose tool, and that's the point. It enables the coordination of tricky interactions, without forcing this behavior to spread across multiple "Rails default" objects and methods. The predictable interface (keyword args in, result object out) doesn't get in the way when we refactor towards better abstractions as we uncover them.
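The calling convention looks roughly like this. Note this is a sketch, not the gem itself: the real Interactor gem provides the context plumbing for you, and `AuthenticateUser` and its hard-coded credential table are purely illustrative. A minimal hand-rolled `Context` stands in here so the example runs without the dependency.

```ruby
# Minimal stand-in for Interactor's context object.
class Context
  def initialize(attrs = {})
    @data = attrs.dup
    @failure = false
  end

  def fail!(attrs = {})
    # Record failure details and mark the context failed -- no exception raised.
    @data.merge!(attrs)
    @failure = true
  end

  def success?
    !@failure
  end

  def failure?
    @failure
  end

  # Read/write arbitrary keys, Interactor-style: context.user, context.user = ...
  def method_missing(name, *args)
    if name.to_s.end_with?("=")
      @data[name.to_s.chomp("=").to_sym] = args.first
    else
      @data[name]
    end
  end

  def respond_to_missing?(*)
    true
  end
end

# One user action, one class, one class-level entry point.
class AuthenticateUser
  ACCOUNTS = { "ada" => "hunter2" }.freeze # stand-in for a real credential check

  def self.call(**kwargs)
    context = Context.new(kwargs)
    if ACCOUNTS[context.username] == context.password
      context.user = context.username.capitalize
    else
      context.fail!(error: "invalid credentials")
    end
    context
  end
end
```

Callers check `result.success?` or `result.failure?` and read whatever was attached (`result.user`, `result.error`) -- keyword args in, result object out.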
Not validating your inputs
Input validation is important for two reasons. First, the Interactor gem (like several other service object implementations) doesn't provide input validation, leaving it as an exercise for the user. Second, without considering input validation, everything will appear to work at first, then fail later when a nil slips through. Sooner or later, Ruby's behavior of returning nil for missing hash keys and out-of-bounds array indices comes for us all.
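The trap is easy to reproduce in a console:

```ruby
# Ruby's permissive lookups: missing hash keys and out-of-range
# array indices return nil instead of raising.
params = { "user" => { "name" => "Ada" } }

missing_key   = params["user"]["email"] # nil -- key absent, no error
missing_index = [10, 20, 30][99]        # nil -- index out of range, no error

# One layer deeper, the nil finally bites -- far from its source:
#   params["account"]["email"]  # NoMethodError: undefined method `[]' for nil
```

The failure surfaces wherever the nil is eventually used, not where it was produced, which is exactly why validating inputs at the service boundary pays off.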
In vanilla Rails, there are two places where inputs go through validation.
- In controllers, where the "strong parameters" define which parameters are allowed to pass through to the model layer
- In models, where stricter validation is performed, including checks for size, content, type, range, presence, format, and more
For a straightforward app, where one controller maps cleanly to one model, this might be enough. However, teams bring in service objects to handle trickier interactions. What happens when your service uses two models, and there's an input value that isn't used by either model? Or that's used by both? Naive uses of the service object pattern will have a single argument: params. You'll need to walk through the code to uncover which specific input names and values indicate the correct use of that object.
There's also a conceptual separation here. This is not model validation (although you could use ActiveModel::Validations as part of your chosen validation solution). Rather, this is validation of the inputs required for the execution of your business logic. Model validation is about data consistency near the persistence layer. The service-object-as-handler's main job is coordinating and organizing. If you validate the inputs upfront, you won't need defensive checking throughout the execution. Forethought here can make problems easier to debug, and failure cases easier to define.
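One hypothetical shape for an upfront validation step (the `ScheduleEvent` service and its rules are illustrative; you could back the same step with ActiveModel::Validations instead of hand-rolled checks):

```ruby
# Validate inputs first; run business logic only on trusted values.
class ScheduleEvent
  Result = Struct.new(:ok, :errors, :value, keyword_init: true)

  def self.call(name:, starts_at:, ends_at:)
    errors = validate(name: name, starts_at: starts_at, ends_at: ends_at)
    return Result.new(ok: false, errors: errors) if errors.any?

    # Past this point, the business logic can trust its inputs --
    # no defensive nil checks scattered through the execution.
    Result.new(ok: true, errors: [], value: "#{name}: #{starts_at}..#{ends_at}")
  end

  def self.validate(name:, starts_at:, ends_at:)
    errors = []
    errors << "name can't be blank" if name.nil? || name.strip.empty?
    if starts_at && ends_at && ends_at <= starts_at
      errors << "end date must be after start date"
    end
    errors
  end
end
```

Failures caused by bad inputs now surface at the boundary, as named errors, instead of as a NoMethodError three calls deep.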
In addition to the above, the biggest reason to pick a single validation pattern is to prevent duplicate work. Left alone, every developer on the project will implement a bunch of similar-but-not-identical validation helpers. As a team, you may as well pick a single way to do it, and adjust as you go.
Not considering the shape of your errors
The driving force behind this warning is the same as for input validation: streamline, or risk every developer making a separate, slightly different, incompatible decision about how to go forward. This is not necessarily a difficult problem, but the loose nature of service objects often means you have to decide how to do it yourself. Expect to adjust this a little, and put it in a helper that enforces a shape (which also gives you a buffer if you discover you need to change it).
This warning is also important if you use service objects as request handlers, like we are in this context. With service objects as handlers, the most likely behavior is to feed the service outputs (including the errors) right into the response. Rails models have their own validation error shapes. Are you going to pass those forward? What if there are multiple models involved? If your business-logic error is not a model error, does that need a different shape, or should it conform to the other errors?
There are several potential error shapes for these cases, and a lot of them seem "right-ish", especially if the response-generating logic for the request is written at the same time. But these inconsistencies only grow more annoying over time: it becomes difficult to write a generic error-case flash message in views, or response helpers for a consistent JSON API, or a conversion of the errors to a GraphQL type.
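One way to hold the line is a small normalizing helper that every service funnels its errors through. This sketch assumes one particular target shape -- an array of `{ field:, message: }` hashes -- chosen for illustration, not prescribed by any library:

```ruby
# Normalize heterogeneous error sources into one consistent shape.
module ErrorShape
  def self.normalize(source)
    case source
    when String
      # A bare business-logic message, not tied to a field.
      [{ field: :base, message: source }]
    when Hash
      # Model-style errors, e.g. { email: ["is invalid", "is taken"] }.
      source.flat_map do |field, messages|
        Array(messages).map { |m| { field: field, message: m } }
      end
    when Array
      source.flat_map { |item| normalize(item) }
    else
      [{ field: :base, message: source.to_s }]
    end
  end
end
```

With one shape, the view layer, a JSON API, and a GraphQL error type can each be written once, against the same structure. The helper is also the single place to change if the shape turns out to be wrong.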
Not leveraging your failure cases
A core idea of the service object is that it can model failures without throwing errors. Imagine a world where, if you entered the wrong password, the site would return a 500 (and the on-call engineer got paged). Business-logic failures are not "errors." They are predictable, valid outcomes for a given use case. They indicate to the user that the behavior they expect didn't happen. Maybe the user needs to change something about their request, or maybe a permissions issue will prevent this user from executing this action. The user didn't get what they wanted, because your system's business logic rejected it, and not because of a programming error.
Expressing the many flavors of failures is an important part of service object behavior. If you're not examining and leveraging the failure modes, you're treating the service object as a callback, which might not be the right tool for the job. That leads to another stumbling block...
Defensively avoiding every possible error
Using a service object means that you've anticipated some potential business-logic failures ("you don't have the required permissions", "you can't adjust an order that has already shipped", "only one of these categories can be applied at one time", etc.). Hopefully, you're validating your inputs as well ("this is not a valid URL", "end date must be after start date", "the magic word can't be empty", etc.). Developers don't tend to love seeing exception stack traces. So, once failure-handling is in place, there's a temptation to take it too far and "helpfully" catch every error, to be handled as a failure case. It's rescue nil, as a class!
"Failurizing" every possible error can complicate the process of detecting and debugging legitimate errors. Maybe the application's persistence layer is in a weird state. Maybe a previous HTTP request didn't correctly set up the preconditions your service object's attempted job. These real issues require attention. Removing the noisy error in your particular service object will just make it harder to notice. Exceptions should be for exceptional circumstances, as in "not part of the normal flow." So listen to your errors. If you experience an unexpected error, see if you can fix the problem upstream. Don't just blindly add a "rescue" and consider it a correctly-modeled failure case.
Is it turtles all the way down?
Controllers are the definitive example of service object usage in a Rails application, but they're not the only place service objects fit. Many teams use them at several different levels of granularity within an application, and many service object libraries anticipate this by providing tooling for composing behaviors out of individual services. Each use case reveals interesting new ways to think about failure modes, and each demonstrates the benefit of considering potential mistakes before they happen. With some upfront planning around the kinds of guardrails that should go up around a flexible tool like the service object, a team can get a lot of long-term value out of the pattern.