Transactional method in Scala Play with Slick (similar to Spring @Transactional, maybe?)
I know Scala, as a functional language, is supposed to work differently from a common OO language such as Java, but I'm sure there has to be a way to wrap a group of database changes in a single transaction, ensuring atomicity as well as the other ACID properties.

As explained in the Slick docs (http://slick.lightbend.com/doc/3.1.0/dbio.html), DBIOAction lets you group database operations in a single transaction like this:

val a = (for {
  ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
  _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally

val f: Future[Unit] = db.run(a)

However, in my use case (and most real-world examples I can think of), I have a code structure with a controller, which exposes the code for my REST endpoint; that controller calls multiple services, and each service delegates database operations to DAOs.

A rough example of my usual code structure:

class UserController @Inject() (userService: UserService) {
  def register(userData: UserData) = {
    userService.save(userData).map(result => Ok(result))
  }
}

class UserService @Inject() (userDao: UserDao, addressDao: AddressDao) {
  def save(userData: UserData) = {
    for {
      savedUser <- userDao.save(userData.toUser)
      savedAddress <- addressDao.save(userData.addressData.toAddress)
    } yield savedUser.copy(address = savedAddress)
  }
}

class SlickUserDao {
  def save(user: User) = {
    db.run((UserSchema.users returning UserSchema.users).insertOrUpdate(user))
  }
}

This is a simple example; most of my services have more complex business logic.

I don't want:

  1. My DAOs to have business logic and decide which database operations to run.
  2. To return DBIO from my DAOs and expose the persistence classes. That completely defeats the purpose of using DAOs in the first place and makes further refactoring much harder.

But I definitely want a transaction around my entire Controller, to ensure that if any code fails, all the changes done in the execution of that method will be rolled back.

How can I implement full controller transactionality with Slick in a Scala Play application? I can't seem to find any documentation on how to do that.

Also, how can I disable auto-commit in Slick? I'm sure there is a way and I'm just missing something.

EDIT:

So, after reading a bit more about it, I feel I now understand better how Slick uses database connections and sessions. This article helped a lot: http://tastefulcode.com/2015/03/19/modern-database-access-scala-slick/.

What I'm doing is a case of composing futures and, based on that article, there's no way to use the same connection and session for multiple operations of that kind.

The problem is: I really can't use any other kind of composition. I have considerable business logic that needs to run between queries.

I guess I could change my code to allow action composition, but as I mentioned before, that forces me to write my business logic with aspects like transactionality in mind. That shouldn't happen: it pollutes the business code and makes writing tests a lot harder.
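To make that distinction concrete, here is a toy, Slick-free illustration of why composing Futures differs from composing action descriptions: a Future starts executing the moment it is created, while a description only runs when explicitly interpreted, which is the single point where a transaction could be opened. All names below are invented for the sketch.

```scala
import java.util.concurrent.atomic.AtomicInteger
import scala.concurrent.{Await, Future}
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration._

val sideEffects = new AtomicInteger(0)

// Future composition: both steps execute as part of *building* `eager`,
// each potentially on its own pooled connection, so there is no later
// moment at which a single transaction could be wrapped around them.
val eager: Future[Int] = for {
  a <- Future { sideEffects.incrementAndGet() }
  b <- Future { sideEffects.incrementAndGet() }
} yield a + b
val eagerResult = Await.result(eager, 2.seconds) // the work already happened

// Description composition (the DBIO style): `program` is only a value
// until it is invoked, so a single interpreter (like db.run) can execute
// the whole thing on one connection, inside one transaction.
val step1: () => Int = () => 1
val step2: Int => Int = a => a + 2
val program: () => Int = () => step2(step1())
val programResult = program() // the single "db.run(...)" moment
```

This is exactly why business logic between queries has to live inside the composed action (e.g. via `DBIO.successful` and `flatMap`) rather than between separate `db.run` calls.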

Is there any workaround for this issue? Any Git project out there that sorts this out that I missed? Or, more drastically, is there another persistence framework that supports this? From what I've read, Anorm supports it nicely, but I may be misunderstanding it, and I don't want to switch frameworks only to find out it doesn't (as happened with Slick).

Belt answered 6/7, 2016 at 9:48 Comment(0)

There is no such thing as transactional annotations or the like in Slick. Your second "don't want" is actually the way to go. It's entirely reasonable to return DBIO[User] from your DAO, and doing so does not defeat its purpose at all; it's the way Slick works.

class UserController @Inject() (userService: UserService) {
  def register(userData: UserData) = {
    userService.save(userData).map(result => Ok(result))
  }
}

class UserService @Inject() (userDao: UserDao, addressDao: AddressDao) {
  def save(userData: UserData): Future[User] = {
    val action = (for {
      savedUser <- userDao.save(userData.toUser)
      savedAddress <- addressDao.save(userData.addressData.toAddress)
      whatever <- DBIO.successful(nonDbStuff)
    } yield (savedUser, savedAddress)).transactionally

    db.run(action).map(result => result._1.copy(address = result._2))
  }
}

class SlickUserDao {
  def save(user: User): DBIO[User] = {
    (UserSchema.users returning UserSchema.users).insertOrUpdate(user)
  }
}
  • The signature of save in your service class is still the same.
  • No db related stuff in controllers.
  • You have full control of transactions.
  • I cannot find a case where the code above is harder to maintain / refactor compared to your original example.

There is also a quite exhaustive discussion that might be interesting for you. See Slick 3.0 withTransaction blocks are required to interact with libraries.
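To see what the all-or-nothing contract of `.transactionally` buys in the composed action above without needing a database, here is a toy, self-contained sketch. `TxAction` and `ToyDb` are invented stand-ins for illustration, not Slick API:

```scala
import scala.util.{Failure, Success, Try}

// Invented stand-in for a composed DBIO: an ordered list of steps that
// must succeed or fail as one unit (what `.transactionally` guarantees in
// Slick 3, where each action otherwise runs in auto-commit mode).
final case class TxAction(steps: List[() => Unit])

object ToyDb {
  var committed: List[String] = Nil // simulated table contents

  // Run every step; if any throws, restore the previous state (rollback).
  def runTransactionally(action: TxAction): Try[Unit] = {
    val before = committed
    Try(action.steps.foreach(step => step())) match {
      case ok @ Success(_) => ok
      case err @ Failure(_) =>
        committed = before // roll back all effects of this action
        err
    }
  }
}
```

With this model, an action whose second step throws leaves `ToyDb.committed` exactly as it was, which is the behaviour the service method above relies on when it wraps both `save` calls in one `.transactionally` block.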

Filler answered 6/7, 2016 at 13:4 Comment(6)
Well, I can actually find several things wrong with it: it makes my services explicitly dependent on Slick classes, so there's no point in having DAOs (no abstraction possible in the DB layer). That means my business logic and DB access sit at the same level, which makes it really hard to test and mock. Also, if I want to change my DB access implementation, I'll need to change my services, so any refactor would be painful. It also pollutes my business code with cross-cutting concerns, such as delimiting a transaction, which, again, will probably make testing harder. Can't be the only way to do it. Thanks anywayBelt
I agree that it is desirable to avoid an explicit dependency on Slick, but AFAIK there is no other solution ATM if you want to work with transactions. I'm not saying this is the definitive solution; in fact, I'd be more than happy if someone proved me wrong. Regarding changing the persistence framework: since Slick is not like your ordinary ORM, that's probably going to be a PITA in any case. And to be honest, how many times have you ever switched persistence frameworks in a live system? :) Anyway, I'll follow this thread; maybe someone comes up with a more decent solution.Filler
Heh, I actually did change persistence frameworks just recently, from Mongo and ReactiveMongo to Postgres and Slick (don't ask...), and I'm sure that soon enough we'll have to change part of it back to Mongo or another document DB. I'm trying to avoid another huge refactoring WHEN that happens. Transactions won't really matter for most of that second refactor, but the refactoring work itself surely will. Writing unit tests also seems a bit harder. Anyway, let's hope someone else has an alternative answer.Belt
After a lot of research and frustrated attempts, it seems it's simply not possible to completely isolate the persistence layer in Slick 3 as I described, so you are right when you say "it's the way to go in Slick". I still feel it's a terrible architecture, and I probably won't use Slick again in another project until I see this limitation sorted. However, I have to award the bounty, and despite the fact that your answer is NOT A SOLUTION to my problem, it is CORRECT and may prevent someone else from wasting lots of time trying to do something that's simply not possible.Belt
It's also worth noting that while you can coordinate the actions of several DAO calls from one Service-layer method, at the Application layer you tend to coordinate several Service methods in order to complete a task. How could you ensure all those Service methods participate in a shared transaction? You don't want to pass Slick types back from your Service layer...Sidekick
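One hedged sketch of how that concern could be addressed: keep Slick out of service signatures by abstracting over the composable-program type, binding it to DBIO only in the Slick module. Every name below (`TxOps`, `Tx`, the services) is illustrative, not Slick or Play API:

```scala
// Services return programs in an abstract effect Tx[_]; the application
// layer composes them into one program, and only the Slick binding would
// call db.run(program.transactionally) once, setting the boundary there.
trait TxOps[Tx[_]] {
  def pure[A](a: A): Tx[A]
  def flatMap[A, B](fa: Tx[A])(f: A => Tx[B]): Tx[B]
}

class UserService[Tx[_]](ops: TxOps[Tx]) {
  def register(name: String): Tx[String] = ops.pure(s"user:$name")
}

class AuditService[Tx[_]](ops: TxOps[Tx]) {
  def record(event: String): Tx[String] = ops.pure(s"audit:$event")
}

// The application layer decides the transaction boundary: both service
// calls end up inside the same composed program.
class Application[Tx[_]](ops: TxOps[Tx],
                         users: UserService[Tx],
                         audit: AuditService[Tx]) {
  def signUp(name: String): Tx[(String, String)] =
    ops.flatMap(users.register(name)) { u =>
      ops.flatMap(audit.record(s"created $u")) { a =>
        ops.pure((u, a))
      }
    }
}

// Trivial test binding: Tx is the identity, so programs run immediately.
// A Slick binding would instead implement TxOps[DBIO] in terms of
// DBIO.successful and DBIO's flatMap.
type Id[A] = A
object IdOps extends TxOps[Id] {
  def pure[A](a: A): Id[A] = a
  def flatMap[A, B](fa: Id[A])(f: A => Id[B]): Id[B] = f(fa)
}
```

This is the tagless-final style; it trades some boilerplate for services whose signatures never mention Slick, so tests can use the identity binding above.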
I only got the example to work by also adding the following in both the DAO and the Service: protected val dbConfig = dbConfigProvider.get[JdbcProfile], import dbConfig._, import profile.api._. Is it a problem that the DAO and the Service each use their own dbConfigProvider? If the Service does not contain those imports, there is a compile error because the DAO returns a DBIO object.Loar
