A Brief Look at Slick (3) - Slick201: Understanding Slick from an FP Perspective

2024-04-09 04:58


In the previous installment I successfully created a simple Slick project and tried out some of its most basic features. Slick is an FRM (Functional Relational Mapper), a Scala SQL query library built for functional programming, which lets developers write database operations in a functional style within Scala. In this installment I want to deepen my understanding of Slick by approaching it with a functional mindset. What has left the deepest impression on me about FP is type matching: understanding what a function does from its parameter and result types. So the functional way of thinking I mentioned above mainly means analyzing what each Slick function does, and how to use it, from the angle of its type signature.

Let's first look at how the tables are defined:

import slick.driver.H2Driver.api._
object slick201 {
  // projection case classes - row templates
  case class Coffee(
    id: Option[Long],
    name: String,
    sup_ID: Int,
    price: Double,
    grade: Grade,
    total: Int
  )
  case class Supplier(
    id: Option[Int],
    name: String,
    address: String,
    website: Option[String]
  )

  // custom column type
  abstract class Grade(points: Int)
  object Grade {
    case object Premium extends Grade(2)
    case object Quality extends Grade(1)
    case object Bestbuy extends Grade(0)

    def fromInt(p: Int) = p match {
      case 2 => Premium
      case 1 => Quality
      case 0 => Bestbuy
    }
    def toInt(g: Grade) = g match {
      case Premium => 2
      case Quality => 1
      case Bestbuy => 0
    }
    implicit val customColumn: BaseColumnType[Grade] =
      MappedColumnType.base[Grade, Int](Grade.toInt, Grade.fromInt)
  }

  // schema - table row definitions
  class Coffees(tag: Tag) extends Table[Coffee](tag, "COFFEES") {
    def id = column[Long]("COF_ID", O.AutoInc, O.PrimaryKey)
    def name = column[String]("COF_NAME")
    def price = column[Double]("COF_PRICE")
    def supID = column[Int]("COF_SUP")
    def grade = column[Grade]("COF_GRADE", O.Default(Grade.Bestbuy))
    def total = column[Int]("COF_TOTAL", O.Default(0))
    def * = (id.?, name, supID, price, grade, total) <> (Coffee.tupled, Coffee.unapply)
    def supplier = foreignKey("SUP_FK", supID, suppliers)(_.id,
      onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)
    def nameidx = index("NM_IX", name, unique = true)
  }
  val coffees = TableQuery[Coffees]

  class Suppliers(tag: Tag) extends Table[Supplier](tag, "SUPPLIERS") {
    def id = column[Int]("SUP_ID", O.PrimaryKey, O.AutoInc)
    def name = column[String]("SUP_NAME")
    def address = column[String]("SUP_ADDR", O.Default("-"))
    def website = column[Option[String]]("SUP_WEB")
    def * = (id.?, name, address, website) <> (Supplier.tupled, Supplier.unapply)
    def addidx = index("ADDR_IX", (name, address), unique = true)
  }
  val suppliers = TableQuery[Suppliers]
}

I have tried to touch on the situations you run into most often: defining columns, building indexes, default values, custom column types and so on. coffees and suppliers are the resulting table Queries, and def * defines the default projection, i.e. the default set of columns a Query returns.

All of these definitions revolve around the table row (Table Row) structure, covering table members and operations (Table member methods), columns (Column) and column options (ColumnOptions). The table row definition methods are essentially all found in slick.lifted.AbstractTable, the table properties are defined in the slick.model namespace, and most of the supporting helper functions live in other objects of the slick.lifted namespace.

The actual type of a table row is as follows:

abstract class Table[T](_tableTag: Tag, _schemaName: Option[String], _tableName: String)
  extends AbstractTable[T](_tableTag, _schemaName, _tableName) { table => ... }

/** The profile-independent superclass of all table row objects.
  * @tparam T Row type for this table. Make sure it matches the type of your `*` projection. */
abstract class AbstractTable[T](val tableTag: Tag, val schemaName: Option[String], val tableName: String)
  extends Rep[T] { ... }


As shown above, Table[T] extends AbstractTable[T]. All the table row definition methods can now be found in slick.profile.RelationalTableComponent.Table. Note that a table row's ultimate type is Rep[T]: the T, which may be a case class or a tuple, is lifted into Rep[T]. That is why most of the supporting functions for table row definitions live in the slick.lifted namespace.
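To see this lifting in a small sketch of my own (the value names are just illustrative, not from the original code):

// c below is the lifted row (a Coffees, which is a Rep[Coffee]); c.price is a Rep[Double],
// and c.price > 100.0 builds a Rep[Boolean] SQL predicate instead of a Scala Boolean.
val expensive: Query[Coffees, Coffee, Seq] = coffees.filter(c => c.price > 100.0)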

Above we defined the table rows against a case class template, so that every column corresponds to a field of that case class. When defining the projection def * we therefore need the <> function:

def <>[R : ClassTag](f: (U => R), g: (R => Option[U])) =
  new MappedProjection[R, U](
    shape.toNode(value),
    MappedScalaType.Mapper(g.andThen(_.get).asInstanceOf[Any => Any], f.asInstanceOf[Any => Any], None),
    implicitly[ClassTag[R]])

f and g are the two conversion functions between the case class and the tuple. In the example above we supplied tupled and unapply, which behave like this:

Coffee.tupled
//res2: ((Option[Long], String, Int, Double, Grade, Int)) => Coffee = <function1>
Coffee.unapply _
//res3: Coffee => Option[(Option[Long], String, Int, Double, Grade, Int)] = <function1>


res2 converts a tuple (...) into a Coffee, and res3 converts a Coffee into an Option[(...)].
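Just to make the two type signatures concrete, here is a minimal sketch of my own (not from the original post) of the same * projection written with hand-rolled conversion functions instead of Coffee.tupled / Coffee.unapply; it would sit inside the Coffees class in place of the original def *:

// hypothetical alternative, equivalent to <> (Coffee.tupled, Coffee.unapply)
def * = (id.?, name, supID, price, grade, total) <> (
  // f: tuple => Coffee
  (t: (Option[Long], String, Int, Double, Grade, Int)) => Coffee(t._1, t._2, t._3, t._4, t._5, t._6),
  // g: Coffee => Option[tuple]
  (c: Coffee) => Some((c.id, c.name, c.sup_ID, c.price, c.grade, c.total))
)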

TableQuery[T] extends Query (see slick.lifted.Query.scala):

/** Represents a database table. Profiles add extension methods to TableQuery
  * for operations that can be performed on tables but not on arbitrary
  * queries, e.g. getting the table DDL. */
class TableQuery[E <: AbstractTable[_]](cons: Tag => E) extends Query[E, E#TableElementType, Seq] {...}
...
sealed trait QueryBase[T] extends Rep[T]

/** An instance of Query represents a query or view, i.e. a computation of a
  * collection type (Rep[Seq[T]]). It is parameterized with both, the mixed
  * type (the type of values you see e.g. when you call map()) and the unpacked
  * type (the type of values that you get back when you run the query).
  *
  * Additional extension methods for queries containing a single column are
  * defined in [[slick.lifted.SingleColumnQueryExtensionMethods]]. */
sealed abstract class Query[+E, U, C[_]] extends QueryBase[C[U]] { self => ... }

Every function available on Query can also be called on a TableQuery. In the example above, coffees and suppliers are in fact Query instances for the database tables COFFEES and SUPPLIERS, and their default column set, e.g. what coffees.result returns, is determined by def * (unless the default projection is changed with map or yield). slick.profile.RelationalProfile.TableQueryExtensionMethods additionally provides functions that only exist on TableQuery, such as schema.
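For example (a minimal sketch of my own): schema is one of those TableQuery-only extension methods and yields the table's DDL; the printed statements are whatever the H2 driver generates, not shown here.

val ddl = coffees.schema ++ suppliers.schema
ddl.create.statements.foreach(println)   // the CREATE TABLE / CREATE INDEX statements
ddl.drop.statements.foreach(println)     // the corresponding DROP statements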

Good, with Query we have arrived at the real topic. Query is arguably the most central type in Slick: every read and write against the database happens by having a Query produce a SQL statement that is sent to the database. Query is a functional, higher-kinded type Query[A], where A stands for the elements the SQL statement is built from; by transforming A we construct different SQL statements. Queries for the different kinds of operation, including retrieve, insert, update and delete, are all obtained through Query transformations. Every Query operation has the shape Query[A] => Query[B], which is the classic functional style and also the style of Scala's collection operations. Let's start with read queries, since we mentioned above that map can be used to set a new result projection:

val q1 = coffees.result
q1.statements.head
//res0: String = select "COF_ID", "COF_NAME", "COF_SUP", "COF_PRICE", "COF_GRADE", "COF_TOTAL" from "COFFEES"

val q2 = coffees.map(r => (r.id, r.name)).result
q2.statements.head
//res1: String = select "COF_ID", "COF_NAME" from "COFFEES"

val q3 = (for (c <- coffees) yield (c.id, c.name)).result
q3.statements.head
//res2: String = select "COF_ID", "COF_NAME" from "COFFEES"


Because the signatures of map and flatMap are:

map[A,B](Q[A])(A => B): Q[B], flatMap[A,B](Q[A])(A => Q[B]): Q[B]

the different SQL statements are essentially produced by this Query[A] => Query[B] style of functional operation, transforming the element embedded in the higher-kinded type. Here is a Query with a filter condition:

val q = coffees.filter(_.price > 100.0).map(r => (r.id, r.name)).result
q.statements.head
//res3: String = select "COF_ID", "COF_NAME" from "COFFEES" where "COF_PRICE" > 100.0

val q4 = coffees.filter(_.price > 100.0).take(4).map(_.name).result
q4.statements.head
//res4: String = select "COF_NAME" from "COFFEES" where "COF_PRICE" > 100.0 limit 4

val q5 = coffees.sortBy(_.id.desc.nullsFirst).map(_.name).drop(3).result
q5.statements.head
//res5: String = select "COF_NAME" from "COFFEES" order by "COF_ID" desc nulls first limit -1 offset 3

Here is a slightly more complex Query that joins two tables:

val q6 = for {
  (c, s) <- coffees join suppliers on (_.supID === _.id)
} yield (c.id, c.name, s.name)
q6.result.statements.head
//res6: String = select x2."COF_ID", x2."COF_NAME", x3."SUP_NAME" from "COFFEES" x2, "SUPPLIERS" x3 where x2."COF_SUP" = x3."SUP_ID"

val q7 = for {
  c <- coffees
  s <- suppliers.filter(c.supID === _.id)
} yield (c.id, c.name, s.name)
q7.result.statements.head
//res7: String = select x2."COF_ID", x2."COF_NAME", x3."SUP_NAME" from "COFFEES" x2, "SUPPLIERS" x3 where x2."COF_SUP" = x3."SUP_ID"


There are also aggregate Queries:

coffees.map(_.price).max.result.statements.head
//res10: String = select max("COF_PRICE") from "COFFEES"
coffees.map(_.total).sum.result.statements.head
//res11: String = select sum("COF_TOTAL") from "COFFEES"
coffees.length.result.statements.head
//res12: String = select count(1) from "COFFEES"
coffees.filter(_.price > 100.0).exists.result.statements.head
//res13: String = select exists(select "COF_TOTAL", "COF_NAME", "COF_SUP", "COF_ID", "COF_PRICE", "COF_GRADE" from "COFFEES" where "COF_PRICE" > 100.0)


Query is a monad, so it supports functional composition. As shown above, every Query operation has the form Query[A] => Query[B]. The A inside Query[A] is a Rep[T], i.e. a SQL statement component. A typical flatMap call has the shape flatMap { a => MakeQuery(a ...) }: the next Query may be built from the value a, whose type is a table row or column definition. Composing Queries is therefore composing SQL statements, and the final result is the target SQL statement.
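To make that concrete, here is q7 from above desugared by hand (a sketch of my own): the next Query really is built from the lifted row value c, and composing the Queries composes the SQL.

// equivalent to q7's for-comprehension: flatMap builds the next Query from c
val q7a = coffees.flatMap { c =>
  suppliers.filter(s => c.supID === s.id).map(s => (c.id, c.name, s.name))
}
q7a.result.statements.head
// should print the same SQL statement as q7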

Slick manipulates data by composing the appropriate SQL statements and sending them to the database for execution, and those SQL statements are of course produced by Queries:

val qInsert = coffees += Coffee(Some(0), "American", 101, 56.0, Grade.Bestbuy, 0)
qInsert.statements.head
//res10: String = insert into "COFFEES" ("COF_NAME","COF_SUP","COF_PRICE","COF_GRADE","COF_TOTAL")  values (?,?,?,?,?)

val qInsert2 = coffees.map { r => (r.name, r.supID, r.price) } += ("Columbia", 101, 102.0)
qInsert2.statements.head
//res11: String = insert into "COFFEES" ("COF_NAME","COF_SUP","COF_PRICE")  values (?,?,?)

val qInsert3 = (suppliers.map { r => (r.id, r.name) }).returning(suppliers.map(_.id)) += (101, "The Coffee Co.,")
qInsert3.statements.head
//res12: String = insert into "SUPPLIERS" ("SUP_NAME")  values (?)


Judging by the SQL statement generated for qInsert3, Slick still has to post-process what JDBC returns before it can hand back the value the user asked for, in this case the auto-generated SUP_ID.
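As a minimal sketch of my own (assuming the db instance that is defined near the end of this post), running qInsert3 would therefore yield that generated key directly:

import scala.concurrent.Future

// returning(suppliers.map(_.id)) makes the insert action produce the
// auto-generated SUP_ID instead of the usual row count.
val generatedIdF: Future[Int] = db.run(qInsert3)

Here are some more examples of data-modifying queries: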

val qDelete = coffees.filter(_.price === 0.0).delete
qDelete.statements.head
//res17: String = delete from "COFFEES" where "COFFEES"."COF_PRICE" = 0.0

val qUpdate = for (c <- coffees if (c.name === "American")) yield c.price
qUpdate.update(10.0).statements.head
//res18: String = update "COFFEES" set "COF_PRICE" = ? where "COFFEES"."COF_NAME" = 'American'


An update query must use the yield of a for-comprehension to pick out the columns to be updated.
The biggest improvement in Slick 3.x is its adoption of functional I/O. Concretely, it introduces the DBIOAction type, a free monad. The deferred-evaluation model of the free monad is what makes database actions composable (composability) and enables concurrent execution (concurrency).

The DBIOAction type is declared as follows:

sealed trait DBIOAction[+R, +S <: NoStream, -E <: Effect] extends Dumpable {
  ...
}

package object dbio {
  /** Simplified type for a streaming [[DBIOAction]] without effect tracking */
  type StreamingDBIO[+R, +T] = DBIOAction[R, Streaming[T], Effect.All]

  /** Simplified type for a [[DBIOAction]] without streaming or effect tracking */
  type DBIO[+R] = DBIOAction[R, NoStream, Effect.All]
  val DBIO = DBIOAction
}


DBIO[+R] and StreamingDBIO[+R, +T] are type aliases that fix the type parameters S and E; they exist to keep the code simpler. All the database operation functions, including result, insert (+=), delete and update, return a DBIOAction:

def result: DriverAction[R, S, Effect.Read] = {...}
def delete: DriverAction[Int, NoStream, Effect.Write] = {...}
def update(value: T): DriverAction[Int, NoStream, Effect.Write] = {...}
def += (value: U): DriverAction[SingleInsertResult, NoStream, Effect.Write] = {...}


The DriverAction above is a subtype of DBIOAction. Because DBIOAction is a free monad, several DBIOActions can be composed without any database I/O side effects happening along the way. Actual reads and writes only take place once we hand the composed DBIOAction to a DBIOAction runner. The runner's signature is:

/** Run an Action asynchronously and return the result as a Future. */
final def run[R](a: DBIOAction[R, NoStream, Nothing]): Future[R] = runInternal(a, false)


run returns a Future[R], i.e. a value of type R delivered once the asynchronous computation has completed. For a Query.result, R is generally some Seq[?].
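For example (a minimal sketch, again assuming the db instance defined at the end of this post), running q2 from above returns a Future of its unpacked result type:

import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.Future

// q2 was coffees.map(r => (r.id, r.name)).result, so the unpacked type is Seq[(Long, String)]
val idNamesF: Future[Seq[(Long, String)]] = db.run(q2)
idNamesF.foreach(_.foreach { case (id, name) => println(s"$id -> $name") })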

A DBIOAction is only a description of a database operation, not the actual read or write, which is why DBIOActions can be composed; composing simply means chaining several actions together. Besides the usual monadic combinators such as map, flatMap and sequence, DBIOAction also offers combining functions such as andThen and zip: andThen returns the result of the last action, while zip returns the results of two actions as a pair. And since DBIOAction is a monad, a for-comprehension is the most flexible and powerful way to compose. Let's compose some of the actions produced by the Queries above:

val initSupAction = suppliers.schema.create andThen qInsert3
val createCoffeeAction = coffees.schema.create
val insertCoffeeAction = qInsert zip qInsert2
val initSupAndCoffee = for {
  _ <- initSupAction
  _ <- createCoffeeAction
  (i1, i2) <- insertCoffeeAction
} yield (i1, i2)


We can combine these steps in any order we like, because they all return a DBIOAction[R], a free monad. Most of the time the actions are composed in some fixed sequence, but occasionally the next action depends on the result of the previous one, and that is exactly where a for-comprehension fits best:

//first select the names of all coffees starting with ESPRESSO, then delete them one by one
val delESAction = (for {
  ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
  _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
} yield ()).transactionally
//delESAction: slick.dbio.DBIOAction[Unit,slick.dbio.NoStream,slick.dbio.Effect.Read ...

//raise the price of one coffee by 10%
def raisePriceAction(i: Long, np: Double, pc: Double) =
  (for (c <- coffees if (c.id === i)) yield c.price).update(np * pc / 100)
//raisePriceAction: raisePriceAction[](val i: Long,val np: Double,val pc: Double) => slick.driver.H2Driver.DriverAction[Int,slick.dbio.NoStream,slick.dbio.Effect.Write]

//raise the price of every coffee priced below 100
val updatePriceAction = (for {
  ips <- coffees.filter(_.price < 100.0).map(r => (r.id, r.price)).result
  _ <- DBIO.seq(ips.map { ip => raisePriceAction(ip._1, ip._2, 110.0) }: _*)
} yield ()).transactionally
//updatePriceAction: slick.dbio.DBIOAction[Unit,slick.dbio.NoStream,slick.dbio.Effect.Read ...


In addition, much like a monad's point, DBIO.successful(r) lifts a plain value r into a DBIOAction, and DBIO.failed(t) lifts a Throwable t into a failed DBIOAction (with result type Nothing, as the output below shows):

DBIO.successful(Supplier(Some(102), "Coffee Company", "", None))
//res19: slick.dbio.DBIOAction[Supplier,slick.dbio.NoStream,slick.dbio.Effect] = SuccessAction(Supplier(Some(102),Coffee Company,,None))
DBIO.failed(new Exception("oh my god..."))
//res20: slick.dbio.DBIOAction[Nothing,slick.dbio.NoStream,slick.dbio.Effect] = FailureAction(java.lang.Exception: oh my god...)


DBIOAction also comes with fairly complete cleanup and exception-handling mechanisms:

//these mainly demonstrate the cleanup mechanisms; don't worry about whether the specific combinations are meaningful
qInsert.andFinally(qDelete)
//res21: slick.dbio.DBIOAction[Int,slick.dbio.NoStream,slick.dbio.Effect.Write with slick.dbio.Effect.Write] = slick.dbio.SynchronousDatabaseAction$$anon$6@1d46b337

updatePriceAction.cleanUp ({
  case Some(e) => initSupAction; DBIO.failed(new Exception("oh my..."))
  case _ => qInsert3
}, true)
//res22: slick.dbio.DBIOAction[Unit,slick.dbio.NoStream,slick.dbio.Effect.Read ...

raisePriceAction(101, 10.0, 110.0).asTry
//res23: slick.dbio.DBIOAction[scala.util.Try[Int],slick.dbio.NoStream,slick.dbio.Effect.Write] = slick.dbio.SynchronousDatabaseAction$$anon$9@60304a44


These examples show that composing DBIOActions means composing database operation steps, which is really composing programs, or composing functionality: we assemble simple programs into a more capable one and only then run the composed program. The signature of run, the function that executes a DBIOAction, is:

/** Run an Action asynchronously and return the result as a Future. */
final def run[R](a: DBIOAction[R, NoStream, Nothing]): Future[R] = runInternal(a, false)


The result of running a DBIOAction is a Future, which is itself a higher-kinded type and can likewise be composed further with functional combinators such as map, flatMap, sequence and andThen. See the following example:

import slick.jdbc.meta.MTable
import scala.concurrent.ExecutionContext.Implicits.global
import scala.concurrent.duration.Duration
import scala.concurrent.{Await, Future}
import scala.util.{Success, Failure}

val db = Database.forURL("jdbc:h2:mem:test1;DB_CLOSE_DELAY=-1", driver = "org.h2.Driver")

def recreateCoffeeTable: Future[Unit] = {
  db.run(MTable.getTables("Coffees")).flatMap {
    case tables if tables.isEmpty =>
      db.run(coffees.schema.create).andThen {
        case Success(_) => println("coffee table created")
        case Failure(e) => println(s"failed to create! ${e.getMessage}")
      }
    case _ =>
      db.run(coffees.schema.drop andThen coffees.schema.create).andThen {
        case Success(_) => println("coffee table recreated")
        case Failure(e) => println(s"failed to recreate! ${e.getMessage}")
      }
  }
}
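One small usage note (a sketch of my own): since recreateCoffeeTable just returns a Future, it can be composed further like any other Future, or awaited at the very edge of the program in a quick test:

// block only at the outermost edge, e.g. in a main method or the REPL
Await.result(recreateCoffeeTable, Duration.Inf)

// or keep composing instead of blocking, e.g. recreate the table and then run a query
val namesAfterReset: Future[Seq[String]] =
  recreateCoffeeTable.flatMap(_ => db.run(coffees.map(_.name).result))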


Finally, here is the complete demo code for this discussion:

import slick.driver.H2Driver.api._

object slick201 {
  import slick.jdbc.meta.MTable
  import scala.concurrent.ExecutionContext.Implicits.global
  import scala.concurrent.duration.Duration
  import scala.concurrent.{Await, Future}
  import scala.util.{Success, Failure}

  // projection case classes - row templates
  case class Coffee(
    id: Option[Long],
    name: String,
    sup_ID: Int,
    price: Double,
    grade: Grade,
    total: Int
  )
  case class Supplier(
    id: Option[Int],
    name: String,
    address: String,
    website: Option[String]
  )

  // custom column type
  abstract class Grade(points: Int)
  object Grade {
    case object Premium extends Grade(2)
    case object Quality extends Grade(1)
    case object Bestbuy extends Grade(0)

    def fromInt(p: Int) = p match {
      case 2 => Premium
      case 1 => Quality
      case 0 => Bestbuy
    }
    def toInt(g: Grade) = g match {
      case Premium => 2
      case Quality => 1
      case Bestbuy => 0
    }
    implicit val customColumn: BaseColumnType[Grade] =
      MappedColumnType.base[Grade, Int](Grade.toInt, Grade.fromInt)
  }

  // schema - table row definitions
  class Coffees(tag: Tag) extends Table[Coffee](tag, "COFFEES") {
    def id = column[Long]("COF_ID", O.AutoInc, O.PrimaryKey)
    def name = column[String]("COF_NAME")
    def price = column[Double]("COF_PRICE")
    def supID = column[Int]("COF_SUP")
    def grade = column[Grade]("COF_GRADE", O.Default(Grade.Bestbuy))
    def total = column[Int]("COF_TOTAL", O.Default(0))
    def * = (id.?, name, supID, price, grade, total) <> (Coffee.tupled, Coffee.unapply)
    def supplier = foreignKey("SUP_FK", supID, suppliers)(_.id,
      onUpdate = ForeignKeyAction.Restrict, onDelete = ForeignKeyAction.Cascade)
    def nameidx = index("NM_IX", name, unique = true)
  }
  val coffees = TableQuery[Coffees]

  class Suppliers(tag: Tag) extends Table[Supplier](tag, "SUPPLIERS") {
    def id = column[Int]("SUP_ID", O.PrimaryKey, O.AutoInc)
    def name = column[String]("SUP_NAME")
    def address = column[String]("SUP_ADDR", O.Default("-"))
    def website = column[Option[String]]("SUP_WEB")
    def * = (id.?, name, address, website) <> (Supplier.tupled, Supplier.unapply)
    def addidx = index("ADDR_IX", (name, address), unique = true)
  }
  val suppliers = TableQuery[Suppliers]

  class Bars(tag: Tag) extends Table[(Int, String)](tag, "BARS") {
    def id = column[Int]("BAR_ID", O.AutoInc, O.PrimaryKey)
    def name = column[String]("BAR_NAME")
    def * = (id, name)
  }
  val bars = TableQuery[Bars]

  Coffee.tupled
  //res2: ((Option[Long], String, Int, Double, Grade, Int)) => Coffee = <function1>
  Coffee.unapply _
  //res3: Coffee => Option[(Option[Long], String, Int, Double, Grade, Int)] = <function1>

  val q1 = coffees.result
  q1.statements.head
  //res0: String = select "COF_ID", "COF_NAME", "COF_SUP", "COF_PRICE", "COF_GRADE", "COF_TOTAL" from "COFFEES"
  val q2 = coffees.map(r => (r.id, r.name)).result
  q2.statements.head
  //res1: String = select "COF_ID", "COF_NAME" from "COFFEES"
  val q3 = (for (c <- coffees) yield (c.id, c.name)).result
  q3.statements.head
  //res2: String = select "COF_ID", "COF_NAME" from "COFFEES"
  val q = coffees.filter(_.price > 100.0).map(r => (r.id, r.name)).result
  q.statements.head
  //res3: String = select "COF_ID", "COF_NAME" from "COFFEES" where "COF_PRICE" > 100.0
  val q4 = coffees.filter(_.price > 100.0).take(4).map(_.name).result
  q4.statements.head
  //res4: String = select "COF_NAME" from "COFFEES" where "COF_PRICE" > 100.0 limit 4
  val q5 = coffees.sortBy(_.id.desc.nullsFirst).map(_.name).drop(3).result
  q5.statements.head
  //res5: String = select "COF_NAME" from "COFFEES" order by "COF_ID" desc nulls first limit -1 offset 3
  val q6 = for {
    (c, s) <- coffees join suppliers on (_.supID === _.id)
  } yield (c.id, c.name, s.name)
  q6.result.statements.head
  //res6: String = select x2."COF_ID", x2."COF_NAME", x3."SUP_NAME" from "COFFEES" x2, "SUPPLIERS" x3 where x2."COF_SUP" = x3."SUP_ID"
  val q7 = for {
    c <- coffees
    s <- suppliers.filter(c.supID === _.id)
  } yield (c.id, c.name, s.name)
  q7.result.statements.head
  //res7: String = select x2."COF_ID", x2."COF_NAME", x3."SUP_NAME" from "COFFEES" x2, "SUPPLIERS" x3 where x2."COF_SUP" = x3."SUP_ID"

  coffees.map(_.price).max.result.statements.head
  //res10: String = select max("COF_PRICE") from "COFFEES"
  coffees.map(_.total).sum.result.statements.head
  //res11: String = select sum("COF_TOTAL") from "COFFEES"
  coffees.length.result.statements.head
  //res12: String = select count(1) from "COFFEES"
  coffees.filter(_.price > 100.0).exists.result.statements.head
  //res13: String = select exists(select "COF_TOTAL", "COF_NAME", "COF_SUP", "COF_ID", "COF_PRICE", "COF_GRADE" from "COFFEES" where "COF_PRICE" > 100.0)

  val qInsert = coffees += Coffee(Some(0), "American", 101, 56.0, Grade.Bestbuy, 0)
  qInsert.statements.head
  //res14: String = insert into "COFFEES" ("COF_NAME","COF_SUP","COF_PRICE","COF_GRADE","COF_TOTAL")  values (?,?,?,?,?)
  val qInsert2 = coffees.map { r => (r.name, r.supID, r.price) } += ("Columbia", 101, 102.0)
  qInsert2.statements.head
  //res15: String = insert into "COFFEES" ("COF_NAME","COF_SUP","COF_PRICE")  values (?,?,?)
  val qInsert3 = (suppliers.map { r => (r.id, r.name) }).returning(suppliers.map(_.id)) += (101, "The Coffee Co.,")
  qInsert3.statements.head
  //res16: String = insert into "SUPPLIERS" ("SUP_NAME")  values (?)
  val qDelete = coffees.filter(_.price === 0.0).delete
  qDelete.statements.head
  //res17: String = delete from "COFFEES" where "COFFEES"."COF_PRICE" = 0.0
  val qUpdate = for (c <- coffees if (c.name === "American")) yield c.price
  qUpdate.update(10.0).statements.head
  //res18: String = update "COFFEES" set "COF_PRICE" = ? where "COFFEES"."COF_NAME" = 'American'

  val initSupAction = suppliers.schema.create andThen qInsert3
  val createCoffeeAction = coffees.schema.create
  val insertCoffeeAction = qInsert zip qInsert2
  val initSupAndCoffee = for {
    _ <- initSupAction
    _ <- createCoffeeAction
    (i1, i2) <- insertCoffeeAction
  } yield (i1, i2)

  //first select the names of all coffees starting with ESPRESSO, then delete them one by one
  val delESAction = (for {
    ns <- coffees.filter(_.name.startsWith("ESPRESSO")).map(_.name).result
    _ <- DBIO.seq(ns.map(n => coffees.filter(_.name === n).delete): _*)
  } yield ()).transactionally
  //delESAction: slick.dbio.DBIOAction[Unit,slick.dbio.NoStream,slick.dbio.Effect.Read with slick.dbio.Effect.Write with slick.dbio.Effect.Transactional] = CleanUpAction(AndThenAction(Vector(slick.driver.JdbcActionComponent$StartTransaction$@6e76c850, FlatMapAction(slick.driver.JdbcActionComponent$QueryActionExtensionMethodsImpl$$anon$1@2005bce5,<function1>,scala.concurrent.impl.ExecutionContextImpl@245036ad))),<function1>,true,slick.dbio.DBIOAction$sameThreadExecutionContext$@294c4c1d)

  //raise the price of one coffee by 10%
  def raisePriceAction(i: Long, np: Double, pc: Double) =
    (for (c <- coffees if (c.id === i)) yield c.price).update(np * pc / 100)
  //raisePriceAction: raisePriceAction[](val i: Long,val np: Double,val pc: Double) => slick.driver.H2Driver.DriverAction[Int,slick.dbio.NoStream,slick.dbio.Effect.Write]

  //raise the price of every coffee priced below 100
  val updatePriceAction = (for {
    ips <- coffees.filter(_.price < 100.0).map(r => (r.id, r.price)).result
    _ <- DBIO.seq(ips.map { ip => raisePriceAction(ip._1, ip._2, 110.0) }: _*)
  } yield ()).transactionally
  //updatePriceAction: slick.dbio.DBIOAction[Unit,slick.dbio.NoStream,slick.dbio.Effect.Read with slick.dbio.Effect.Write with slick.dbio.Effect.Transactional] = CleanUpAction(AndThenAction(Vector(slick.driver.JdbcActionComponent$StartTransaction$@6e76c850, FlatMapAction(slick.driver.JdbcActionComponent$QueryActionExtensionMethodsImpl$$anon$1@49c8a41f,<function1>,scala.concurrent.impl.ExecutionContextImpl@245036ad))),<function1>,true,slick.dbio.DBIOAction$sameThreadExecutionContext$@294c4c1d)

  DBIO.successful(Supplier(Some(102), "Coffee Company", "", None))
  //res19: slick.dbio.DBIOAction[Supplier,slick.dbio.NoStream,slick.dbio.Effect] = SuccessAction(Supplier(Some(102),Coffee Company,,None))
  DBIO.failed(new Exception("oh my god..."))
  //res20: slick.dbio.DBIOAction[Nothing,slick.dbio.NoStream,slick.dbio.Effect] = FailureAction(java.lang.Exception: oh my god...)

  //demonstrating the cleanup mechanisms; the specific combinations are not meant to be meaningful
  qInsert.andFinally(qDelete)
  //res21: slick.dbio.DBIOAction[Int,slick.dbio.NoStream,slick.dbio.Effect.Write with slick.dbio.Effect.Write] = slick.dbio.SynchronousDatabaseAction$$anon$6@1d46b337
  updatePriceAction.cleanUp ({
    case Some(e) => initSupAction; DBIO.failed(new Exception("oh my..."))
    case _ => qInsert3
  }, true)
  //res22: slick.dbio.DBIOAction[Unit,slick.dbio.NoStream,slick.dbio.Effect.Read with slick.dbio.Effect.Write with slick.dbio.Effect.Transactional with slick.dbio.Effect.Write] = CleanUpAction(CleanUpAction(AndThenAction(Vector(slick.driver.JdbcActionComponent$StartTransaction$@6e76c850, FlatMapAction(slick.driver.JdbcActionComponent$QueryActionExtensionMethodsImpl$$anon$1@1f7aad00,<function1>,scala.concurrent.impl.ExecutionContextImpl@245036ad))),<function1>,true,slick.dbio.DBIOAction$sameThreadExecutionContext$@294c4c1d),<function1>,true,scala.concurrent.impl.ExecutionContextImpl@245036ad)
  raisePriceAction(101, 10.0, 110.0).asTry
  //res23: slick.dbio.DBIOAction[scala.util.Try[Int],slick.dbio.NoStream,slick.dbio.Effect.Write] = slick.dbio.SynchronousDatabaseAction$$anon$9@60304a44

  val db = Database.forURL("jdbc:h2:mem:test1;DB_CLOSE_DELAY=-1", driver = "org.h2.Driver")

  def recreateCoffeeTable: Future[Unit] = {
    db.run(MTable.getTables("Coffees")).flatMap {
      case tables if tables.isEmpty =>
        db.run(coffees.schema.create).andThen {
          case Success(_) => println("coffee table created")
          case Failure(e) => println(s"failed to create! ${e.getMessage}")
        }
      case _ =>
        db.run(coffees.schema.drop andThen coffees.schema.create).andThen {
          case Success(_) => println("coffee table recreated")
          case Failure(e) => println(s"failed to recreate! ${e.getMessage}")
        }
    }
  }
}

























