The 22-field limit on case classes and functions in Scala 2.11 + Play Framework 2.3

Scala 2.11 is out, and the 22-field limit for case classes seems to be fixed (Scala Issue, Release Notes).

This has been an issue for me for a while, because I use case classes to model database entities that have more than 22 fields in Play + Postgres Async. My workaround in Scala 2.10 was to break the models into multiple case classes, roughly as in the (hypothetical) sketch below, but I find this solution hard to maintain and extend:
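
case class MyDbEntityPart1(
  id: String,
  field1: String,
  // ... up to the 22-field ceiling
  field11: String
)

case class MyDbEntityPart2(
  field12: String,
  // ...
  field23: String
)

case class MyDbEntity(part1: MyDbEntityPart1, part2: MyDbEntityPart2)

object MyDbEntity {
  import play.api.libs.json.Json
  // Each part stays under the limit, so the macros work;
  // the cost is an extra level of nesting everywhere the model is used.
  implicit val part1Format = Json.format[MyDbEntityPart1]
  implicit val part2Format = Json.format[MyDbEntityPart2]
  implicit val entityFormat = Json.format[MyDbEntity]
}

After switching to Play 2.3.0-RC1 + Scala 2.11.0, I was hoping I could instead model the entity as a single case class: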

package entities

case class MyDbEntity(
  id: String,
  field1: String,
  field2: Boolean,
  field3: String,
  field4: String,
  field5: String,
  field6: String,
  field7: String,
  field8: String,
  field9: String,
  field10: String,
  field11: String,
  field12: String,
  field13: String,
  field14: String,
  field15: String,
  field16: String,
  field17: String,
  field18: String,
  field19: String,
  field20: String,
  field21: String,
  field22: String,
  field23: String
)

object MyDbEntity {
  import play.api.libs.json.Json
  import play.api.data._
  import play.api.data.Forms._

  implicit val entityReads = Json.reads[MyDbEntity]
  implicit val entityWrites = Json.writes[MyDbEntity]
}

The code above fails to compile with the following message for both the "Reads" and the "Writes":

No unapply function found

Updating the "Reads" and "Writes" to:

  implicit val entityReads: Reads[MyDbEntity] = (
    (__ \ "id").read[Long] and
    (__ \ "field_1").read[String]
    ........
  )(MyDbEntity.apply _)  

  implicit val postWrites: Writes[MyDbEntity] = (
    (__ \ "id").write[Long] and
    (__ \ "user").write[String]
    ........
  )(unlift(MyDbEntity.unapply))

This also fails to compile, now with:

  implementation restricts functions to 22 parameters

  value unapply is not a member of object models.MyDbEntity

My understanding is that Scala 2.11 still has some limitations on functions, and that something like what I described above is not yet possible. This seems weird to me, as I don't see the benefit of lifting the restriction on case classes if one of its major use cases is still not supported, so I'm wondering if I'm missing something.

Pointers to issues or implementation details are more than welcome! Thanks!

Costar answered 9/5, 2014 at 18:31 Comment(4)
Take a look at the relevant pull request description: the very first limitation mentioned is the lack of unapply for classes with more than 22 fields, and this was done for a reason (AFAIR, there would be an exponential blowup in classfile size). – Birkner
A case class with >22 params cannot have unapply, since it would have to return a TupleX with X > 22, and tuples are still limited to 22. The sad part is that Json.format[MyCaseClass] (the macro-based solution) doesn't need this limitation (it could recognize that it is a case class and extract the fields directly, without unapply, like pattern matching does), but it currently looks for unapply and fails. – Tiemroth
For those interested, here is a ticket tracking the removal of this limitation: github.com/playframework/playframework/issues/3174 – Calise
The ticket is still open. Is there still a problem with >22 fields in Play 2.3 with Scala 2.11.1? Researching different stacks for a new project and wondering if this will be a problem. Thx – Spelling

This is not possible, out of the box, for several reasons. First, Scala still caps Function and Tuple types at 22, which is also why a case class with more than 22 fields gets no unapply or tupled method, so the Json macros have nothing to call. Second, Play's functional syntax (play.api.libs.functional.FunctionalBuilder) only defines its CanBuildN helpers up to CanBuild21, so the hand-written and/~ style runs out of combinators as well.

However, it is possible to bypass the second point by extending the functional syntax yourself.

First, create the missing FunctionalBuilder arities (CustomCanBuild22 below, with a stub showing where CustomCanBuild23 and beyond would follow):

import play.api.libs.functional._ // FunctionalCanBuild, Functor, ContravariantFunctor, InvariantFunctor, VariantExtractor, Reducer, ~

class CustomFunctionalBuilder[M[_]](canBuild: FunctionalCanBuild[M]) extends FunctionalBuilder[M](canBuild) {

  class CustomCanBuild22[A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22](m1: M[A1 ~ A2 ~ A3 ~ A4 ~ A5 ~ A6 ~ A7 ~ A8 ~ A9 ~ A10 ~ A11 ~ A12 ~ A13 ~ A14 ~ A15 ~ A16 ~ A17 ~ A18 ~ A19 ~ A20 ~ A21], m2: M[A22]) {
    def ~[A23](m3: M[A23]) = new CustomCanBuild23[A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22, A23](canBuild(m1, m2), m3)

    def and[A23](m3: M[A23]) = this.~(m3)

    def apply[B](f: (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22) => B)(implicit fu: Functor[M]): M[B] =
      fu.fmap[A1 ~ A2 ~ A3 ~ A4 ~ A5 ~ A6 ~ A7 ~ A8 ~ A9 ~ A10 ~ A11 ~ A12 ~ A13 ~ A14 ~ A15 ~ A16 ~ A17 ~ A18 ~ A19 ~ A20 ~ A21 ~ A22, B](canBuild(m1, m2), { case a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a6 ~ a7 ~ a8 ~ a9 ~ a10 ~ a11 ~ a12 ~ a13 ~ a14 ~ a15 ~ a16 ~ a17 ~ a18 ~ a19 ~ a20 ~ a21 ~ a22 => f(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22) })

    def apply[B](f: B => (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22))(implicit fu: ContravariantFunctor[M]): M[B] =
      fu.contramap(canBuild(m1, m2), (b: B) => { val (a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22) = f(b); new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(a1, a2), a3), a4), a5), a6), a7), a8), a9), a10), a11), a12), a13), a14), a15), a16), a17), a18), a19), a20), a21), a22) })

    def apply[B](f1: (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22) => B, f2: B => (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22))(implicit fu: InvariantFunctor[M]): M[B] =
      fu.inmap[A1 ~ A2 ~ A3 ~ A4 ~ A5 ~ A6 ~ A7 ~ A8 ~ A9 ~ A10 ~ A11 ~ A12 ~ A13 ~ A14 ~ A15 ~ A16 ~ A17 ~ A18 ~ A19 ~ A20 ~ A21 ~ A22, B](
        canBuild(m1, m2), { case a1 ~ a2 ~ a3 ~ a4 ~ a5 ~ a6 ~ a7 ~ a8 ~ a9 ~ a10 ~ a11 ~ a12 ~ a13 ~ a14 ~ a15 ~ a16 ~ a17 ~ a18 ~ a19 ~ a20 ~ a21 ~ a22 => f1(a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22) },
        (b: B) => { val (a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22) = f2(b); new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(new ~(a1, a2), a3), a4), a5), a6), a7), a8), a9), a10), a11), a12), a13), a14), a15), a16), a17), a18), a19), a20), a21), a22) }
      )

    def join[A >: A1](implicit witness1: <:<[A, A1], witness2: <:<[A, A2], witness3: <:<[A, A3], witness4: <:<[A, A4], witness5: <:<[A, A5], witness6: <:<[A, A6], witness7: <:<[A, A7], witness8: <:<[A, A8], witness9: <:<[A, A9], witness10: <:<[A, A10], witness11: <:<[A, A11], witness12: <:<[A, A12], witness13: <:<[A, A13], witness14: <:<[A, A14], witness15: <:<[A, A15], witness16: <:<[A, A16], witness17: <:<[A, A17], witness18: <:<[A, A18], witness19: <:<[A, A19], witness20: <:<[A, A20], witness21: <:<[A, A21], witness22: <:<[A, A22], fu: ContravariantFunctor[M]): M[A] =
      apply[A]((a: A) => (a: A1, a: A2, a: A3, a: A4, a: A5, a: A6, a: A7, a: A8, a: A9, a: A10, a: A11, a: A12, a: A13, a: A14, a: A15, a: A16, a: A17, a: A18, a: A19, a: A20, a: A21, a: A22))(fu)

    def reduce[A >: A1, B](implicit witness1: <:<[A1, A], witness2: <:<[A2, A], witness3: <:<[A3, A], witness4: <:<[A4, A], witness5: <:<[A5, A], witness6: <:<[A6, A], witness7: <:<[A7, A], witness8: <:<[A8, A], witness9: <:<[A9, A], witness10: <:<[A10, A], witness11: <:<[A11, A], witness12: <:<[A12, A], witness13: <:<[A13, A], witness14: <:<[A14, A], witness15: <:<[A15, A], witness16: <:<[A16, A], witness17: <:<[A17, A], witness18: <:<[A18, A], witness19: <:<[A19, A], witness20: <:<[A20, A], witness21: <:<[A21, A], witness22: <:<[A22, A], fu: Functor[M], reducer: Reducer[A, B]): M[B] =
      apply[B]((a1: A1, a2: A2, a3: A3, a4: A4, a5: A5, a6: A6, a7: A7, a8: A8, a9: A9, a10: A10, a11: A11, a12: A12, a13: A13, a14: A14, a15: A15, a16: A16, a17: A17, a18: A18, a19: A19, a20: A20, a21: A21, a22: A22) => reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.append(reducer.unit(a1: A), a2: A), a3: A), a4: A), a5: A), a6: A), a7: A), a8: A), a9: A), a10: A), a11: A), a12: A), a13: A), a14: A), a15: A), a16: A), a17: A), a18: A), a19: A), a20: A), a21: A), a22: A))(fu)

    def tupled(implicit v: VariantExtractor[M]): M[(A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22)] =
      v match {
        case FunctorExtractor(fu) => apply { (a1: A1, a2: A2, a3: A3, a4: A4, a5: A5, a6: A6, a7: A7, a8: A8, a9: A9, a10: A10, a11: A11, a12: A12, a13: A13, a14: A14, a15: A15, a16: A16, a17: A17, a18: A18, a19: A19, a20: A20, a21: A21, a22: A22) => (a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22) }(fu)
        case ContravariantFunctorExtractor(fu) => apply[(A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22)] { (a: (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22)) => (a._1, a._2, a._3, a._4, a._5, a._6, a._7, a._8, a._9, a._10, a._11, a._12, a._13, a._14, a._15, a._16, a._17, a._18, a._19, a._20, a._21, a._22) }(fu)
        case InvariantFunctorExtractor(fu) => apply[(A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22)]({ (a1: A1, a2: A2, a3: A3, a4: A4, a5: A5, a6: A6, a7: A7, a8: A8, a9: A9, a10: A10, a11: A11, a12: A12, a13: A13, a14: A14, a15: A15, a16: A16, a17: A17, a18: A18, a19: A19, a20: A20, a21: A21, a22: A22) => (a1, a2, a3, a4, a5, a6, a7, a8, a9, a10, a11, a12, a13, a14, a15, a16, a17, a18, a19, a20, a21, a22) }, { (a: (A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22)) => (a._1, a._2, a._3, a._4, a._5, a._6, a._7, a._8, a._9, a._10, a._11, a._12, a._13, a._14, a._15, a._16, a._17, a._18, a._19, a._20, a._21, a._22) })(fu)
      }

  }

  class CustomCanBuild23[A1, A2, A3, A4, A5, A6, A7, A8, A9, A10, A11, A12, A13, A14, A15, A16, A17, A18, A19, A20, A21, A22, A23](m1: M[A1 ~ A2 ~ A3 ~ A4 ~ A5 ~ A6 ~ A7 ~ A8 ~ A9 ~ A10 ~ A11 ~ A12 ~ A13 ~ A14 ~ A15 ~ A16 ~ A17 ~ A18 ~ A19 ~ A20 ~ A21 ~ A22], m2: M[A23]) {
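    // presumably filled in the same way as CustomCanBuild22, and so on up to the arity your model needs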
  }

}

and then by providing your own FunctionalBuilderOps instance, so that the and/~ syntax goes through the custom builder:

implicit def customToFunctionalBuilderOps[M[_], A](a: M[A])(implicit fcb: FunctionalCanBuild[M]) = new CustomFunctionalBuilderOps[M, A](a)(fcb)
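
The CustomFunctionalBuilderOps class itself is not shown in the answer; a minimal sketch, assuming it simply mirrors play.api.libs.functional.FunctionalBuilderOps while instantiating the custom builder, would be:

class CustomFunctionalBuilderOps[M[_], A](ma: M[A])(implicit fcb: FunctionalCanBuild[M]) {
  // Same shape as play.api.libs.functional.FunctionalBuilderOps, but
  // starts chains on CustomFunctionalBuilder instead of FunctionalBuilder.
  def ~[B](mb: M[B]): FunctionalBuilder[M]#CanBuild2[A, B] = {
    val b = new CustomFunctionalBuilder(fcb)
    new b.CanBuild2[A, B](ma, mb)
  }

  def and[B](mb: M[B]): FunctionalBuilder[M]#CanBuild2[A, B] = this.~(mb)
}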

Finally, regarding the first point, I have sent a pull request to try to simplify the current implementation.

Tibbs answered 11/5, 2014 at 1:52 Comment(3)
For Naveen's gist, I got some compile errors. For future reference, you can see the fix here: gist.github.com/angeloh/5fa10d8d5321bf650dd9 – Volitive
Getting a stack overflow when using that Shapeless wrapper. – Alegar
See my answer below re github.com/xdotai/play-json-extensions; it seems there is a simple fix for Play. Will be testing next week. – Bi

We were also breaking our models into multiple case classes, but this was quickly becoming unmanageable. We use Slick as our object-relational mapper, and Slick 2.0 comes with a code generator that we use to generate classes (which come with apply methods and copy constructors to mimic case classes) along with methods to instantiate models from Json. (We do not automatically generate methods to convert models into Json, because we have too many special cases to deal with.) Note that using the Slick code generator does not require you to use Slick as your object-relational mapper; a sketch of how the generator is invoked follows.
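
For context, a minimal sketch of how the Slick 2.0 code generator is typically invoked (the driver and connection settings here are hypothetical):

object GenerateModels extends App {
  // Slick 2.0's standard generator; a customized SourceCodeGenerator
  // subclass overrides its templates to emit the methods shown below.
  scala.slick.model.codegen.SourceCodeGenerator.main(Array(
    "scala.slick.driver.PostgresDriver", // Slick driver (hypothetical)
    "org.postgresql.Driver",             // JDBC driver
    "jdbc:postgresql://localhost/mydb",  // database URL (hypothetical)
    "target/generated-sources",          // output folder
    "entities"                           // package for the generated code
  ))
}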

This is part of the input to the code generator: this method takes a JsObject and uses it to either instantiate a new model or update an existing model.

private def getItem(original: Option[${name}], json: JsObject, trackingData: TrackingData)(implicit session: scala.slick.session.Session): Try[${name}] = {
  preProcess("$name", columnSet, json, trackingData).flatMap(updatedJson => {
    ${indent(indent(indent(entityColumnsSansId.map(c => s"""val ${c.name}_Parsed = parseJsonField[${c.exposedType}](original.map(_.${c.name}), "${c.name}", updatedJson, "${c.exposedType}")""").mkString("\n"))))}
    val errs = Seq(${indent(indent(indent(indent(entityColumnsSansId.map(c => s"${c.name}_Parsed.map(_ => ())").mkString(", ")))))}).condenseUnit
    for {
      _ <- errs
      ${indent(indent(indent(indent(entityColumnsSansId.map(c => s"${c.name}_Val <- ${c.name}_Parsed").mkString("\n")))))}
    } yield {
      original.map(_.copy(${entityColumnsSansId.map(c => s"${c.name} = ${c.name}_Val").mkString(", ")}))
        .getOrElse(${name}.apply(id = None, ${entityColumnsSansId.map(c => s"${c.name} = ${c.name}_Val").mkString(", ")}))
    }
  })
}

For example, with our ActivityLog model this produces the code below (the hand-written helpers are shown first, followed by the generated getItem). If "original" is None then this is being called from a "createFromJson" method and we instantiate a new model; if "original" is Some(activityLog) then this is being called from an "updateFromJson" method and we update the existing model. The "condenseUnit" method called on the "val errs = ..." line takes a Seq[Try[Unit]] and produces a Try[Unit]; if the Seq contains any failures, the resulting Try[Unit] concatenates their exception messages. The parseJsonField and parseField methods are not generated - they are just referenced from the generated code.

private def parseField[T](name: String, json: JsObject, tpe: String)(implicit r: Reads[T]): Try[T] = {
  Try((json \ name).as[T]).recoverWith {
    case e: Exception => Failure(new IllegalArgumentException("Failed to parse " + Json.stringify(json \ name) + " as " + name + " : " + tpe))
  }
}

def parseJsonField[T](default: Option[T], name: String, json: JsObject, tpe: String)(implicit r: Reads[T]): Try[T] = {
  default match {
    case Some(t) => if(json.keys.contains(name)) parseField(name, json, tpe)(r) else Try(t)
    case _ => parseField(name, json, tpe)(r)
  }
}

private def getItem(original: Option[ActivityLog], json: JsObject, trackingData: TrackingData)(implicit session: scala.slick.session.Session): Try[ActivityLog] = {
  preProcess("ActivityLog", columnSet, json, trackingData).flatMap(updatedJson => {
    val user_id_Parsed = parseJsonField[Option[Int]](original.map(_.user_id), "user_id", updatedJson, "Option[Int]")
    val user_name_Parsed = parseJsonField[Option[String]](original.map(_.user_name), "user_name", updatedJson, "Option[String]")
    val item_id_Parsed = parseJsonField[Option[String]](original.map(_.item_id), "item_id", updatedJson, "Option[String]")
    val item_item_type_Parsed = parseJsonField[Option[String]](original.map(_.item_item_type), "item_item_type", updatedJson, "Option[String]")
    val item_name_Parsed = parseJsonField[Option[String]](original.map(_.item_name), "item_name", updatedJson, "Option[String]")
    val modified_Parsed = parseJsonField[Option[String]](original.map(_.modified), "modified", updatedJson, "Option[String]")
    val action_name_Parsed = parseJsonField[Option[String]](original.map(_.action_name), "action_name", updatedJson, "Option[String]")
    val remote_ip_Parsed = parseJsonField[Option[String]](original.map(_.remote_ip), "remote_ip", updatedJson, "Option[String]")
    val item_key_Parsed = parseJsonField[Option[String]](original.map(_.item_key), "item_key", updatedJson, "Option[String]")
    val created_at_Parsed = parseJsonField[Option[java.sql.Timestamp]](original.map(_.created_at), "created_at", updatedJson, "Option[java.sql.Timestamp]")
    val as_of_date_Parsed = parseJsonField[Option[java.sql.Timestamp]](original.map(_.as_of_date), "as_of_date", updatedJson, "Option[java.sql.Timestamp]")
    val errs = Seq(user_id_Parsed.map(_ => ()), user_name_Parsed.map(_ => ()), item_id_Parsed.map(_ => ()), item_item_type_Parsed.map(_ => ()), item_name_Parsed.map(_ => ()), modified_Parsed.map(_ => ()), action_name_Parsed.map(_ => ()), remote_ip_Parsed.map(_ => ()), item_key_Parsed.map(_ => ()), created_at_Parsed.map(_ => ()), as_of_date_Parsed.map(_ => ())).condenseUnit
    for {
      _ <- errs
      user_id_Val <- user_id_Parsed
      user_name_Val <- user_name_Parsed
      item_id_Val <- item_id_Parsed
      item_item_type_Val <- item_item_type_Parsed
      item_name_Val <- item_name_Parsed
      modified_Val <- modified_Parsed
      action_name_Val <- action_name_Parsed
      remote_ip_Val <- remote_ip_Parsed
      item_key_Val <- item_key_Parsed
      created_at_Val <- created_at_Parsed
      as_of_date_Val <- as_of_date_Parsed
    } yield {
      original.map(_.copy(user_id = user_id_Val, user_name = user_name_Val, item_id = item_id_Val, item_item_type = item_item_type_Val, item_name = item_name_Val, modified = modified_Val, action_name = action_name_Val, remote_ip = remote_ip_Val, item_key = item_key_Val, created_at = created_at_Val, as_of_date = as_of_date_Val))
        .getOrElse(ActivityLog.apply(id = None, user_id = user_id_Val, user_name = user_name_Val, item_id = item_id_Val, item_item_type = item_item_type_Val, item_name = item_name_Val, modified = modified_Val, action_name = action_name_Val, remote_ip = remote_ip_Val, item_key = item_key_Val, created_at = created_at_Val, as_of_date = as_of_date_Val))
    }
  })
}
Crux answered 11/5, 2014 at 6:19 Comment(0)

You can use Jackson's Scala module; Play's JSON support is itself built on Jackson. I don't know why Play puts a 22-field limit here while Jackson supports more than 22 fields. It may make sense that a function call should never take more than 22 parameters, but a DB entity can have hundreds of columns, so this restriction is ridiculous and makes Play less productive. Check this out:

import com.fasterxml.jackson.databind.ObjectMapper
import com.fasterxml.jackson.module.scala.experimental.ScalaObjectMapper
import com.fasterxml.jackson.module.scala.DefaultScalaModule

object JacksonUtil extends App {
  val mapper = new ObjectMapper with ScalaObjectMapper
  mapper.registerModule(DefaultScalaModule)

  val t23 = T23("a","b","c","d","e","f","g","h","i","j","k","l","m","n","o","p","q","r","s","t","u","v","w")

  println(mapper.writeValueAsString(t23))
}
case class T23(f1:String,f2:String,f3:String,f4:String,f5:String,f6:String,f7:String,
    f8:String,f9:String,f10:String,f11:String,f12:String,f13:String,f14:String,f15:String,
    f16:String,f17:String,f18:String,f19:String,f20:String,f21:String,f22:String,f23:String)
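
Continuing the example: ScalaObjectMapper also adds a Manifest-based readValue[T], so the round trip back into the 23-field case class is just as direct (a sketch):

  // Deserialize straight back into T23; no Reads/Writes or unapply involved.
  val json = mapper.writeValueAsString(t23)
  val restored = mapper.readValue[T23](json)
  assert(restored == t23)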
Streptomycin answered 31/3, 2015 at 7:58 Comment(0)

It seems this handles it all nicely.

A >22-field case class formatter and more for play-json: https://github.com/xdotai/play-json-extensions

Supports Scala 2.11.x, 2.12.x, and 2.13.x, and Play 2.3, 2.4, 2.5, and 2.7.

It is referenced in the play-json issue as the preferred solution (but has not yet been merged).
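
Usage is roughly as follows, from memory of the project's README (check the README for the release you pick; newer versions also require an extra implicit encoder import):

import ai.x.play.json.Jsonx
import play.api.libs.json.OFormat

// Macro-derived formatter that sidesteps unapply, so it also
// works for case classes with more than 22 fields.
implicit val myDbEntityFormat: OFormat[MyDbEntity] = Jsonx.formatCaseClass[MyDbEntity]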

Bi answered 1/8, 2019 at 21:30 Comment(0)

I'm making a library; please try it: https://github.com/xuwei-k/play-twenty-three

Latten answered 14/2, 2015 at 7:39 Comment(2)
Didn't read all the code, but I've seen at least a reference to 252 fields!!! LOL :) – Conspire
"640K should be enough for everyone" – Hanukkah

There are cases where case classes do not work; one is that a case class cannot take more than 22 fields. Another is that you do not know the schema beforehand. In that (Spark) approach, the data is loaded as an RDD of Row objects, the schema is created separately using StructType and StructField objects (which represent a table and a field, respectively), and the schema is applied to the row RDD to create a DataFrame, roughly as in the sketch below.
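
A minimal sketch of that pattern with the Spark 1.x API (column names and values are hypothetical):

import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

object WideRows extends App {
  val sc = new SparkContext(new SparkConf().setAppName("wide-rows").setMaster("local[*]"))
  val sqlContext = new SQLContext(sc)

  // Data as plain Rows: no case class, so no 22-field ceiling.
  val rowRDD = sc.parallelize(Seq(
    Row("id-1", "a", "b"),
    Row("id-2", "c", "d")
  ))

  // Schema built at runtime; extend with as many fields as the table has.
  val schema = StructType(Seq(
    StructField("id", StringType, nullable = false),
    StructField("field1", StringType, nullable = true),
    StructField("field2", StringType, nullable = true)
  ))

  val df = sqlContext.createDataFrame(rowRDD, schema)
  df.show()
}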

Savino answered 14/9, 2015 at 14:1 Comment(0)

I tried the Shapeless "Automatic Typeclass Derivation"-based solution proposed in another answer, and it didn't work for our models - it was throwing StackOverflowErrors (a case class with ~30 fields and 4 nested collections of case classes with 4-10 fields each).

So we adopted this solution instead, and it has worked flawlessly; we confirmed that by writing a ScalaCheck test. Note that it requires Play Json 2.4.

Alegar answered 7/10, 2015 at 8:19 Comment(0)

In Dotty (Scala 3) you can now use more than 22 fields in a case class.
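
For illustration (field names hypothetical), this now compiles and behaves like any other case class, since the 22-element limits on functions and tuples are gone:

case class Wide(
  f1: Int, f2: Int, f3: Int, f4: Int, f5: Int, f6: Int,
  f7: Int, f8: Int, f9: Int, f10: Int, f11: Int, f12: Int,
  f13: Int, f14: Int, f15: Int, f16: Int, f17: Int, f18: Int,
  f19: Int, f20: Int, f21: Int, f22: Int, f23: Int
)

@main def demo(): Unit =
  val w = Wide(1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12,
               13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23)
  println(w.f23)            // field access works as usual
  println(w.copy(f23 = 42)) // ...and so does copy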

Curley answered 11/9, 2020 at 15:33 Comment(0)
