  • How To Use Hosted Nexus With Sbt

    If you work in an organization, or even on a personal project, sooner or later you're going to need an artifact server. There are a few reasons for this: you want to host your build artifacts without making them available to the public; you want a proxy server to go through when downloading artifacts, which is particularly helpful when more than one person is working on the project; or you simply have some jars that aren't in Maven Central or other public repos and you want to host them so your build tool can list them as dependencies.
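
    A rough sketch of what that looks like on the sbt side; the Nexus URLs and the credentials path are placeholders, not from the post:

    // build.sbt: resolve through and publish to a hosted Nexus (hypothetical URLs)
    resolvers += "My Nexus" at "https://nexus.example.com/repository/maven-public/"

    publishTo := {
      val nexus = "https://nexus.example.com/repository/"
      if (isSnapshot.value) Some("snapshots" at nexus + "maven-snapshots/")
      else Some("releases" at nexus + "maven-releases/")
    }

    // credentials usually live outside the repo, e.g. ~/.sbt/.credentials
    credentials += Credentials(Path.userHome / ".sbt" / ".credentials")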

    read more
  • Sbt Plugins I Like To Use

    Sbt is really awesome and there are plugins that make it even better. I thought I would write a post about the plugins I'm using and why I like them.

    read more
  • Continuous Integration For Scala Projects

    So what is continuous integration? At a very high level it's about answering one question: is the code in a good state, no matter how many people are updating it, how often it changes, or how many building blocks the project consists of. When you start working on a feature you branch off mainline and start introducing changes. At some point, when you find things are broken, you have to spend time figuring out whether it's because of your changes or because the project was already in a bad state. And when you find out it was already in a bad state, you want to trace (a) who introduced the breaking change and (b) when. To avoid the headache and frustration, what you really want to know is whether the project is in a good state before you even start working on it.

    One option would be to compile the code before you start changing it. But if the code compiles, is that enough? You also want to know if the unit tests are passing. Is that enough? Check for compiler warnings? Style violations? Code coverage? What if you run code coverage and it shows 85%, does that mean everything is good? What if coverage was 95% last week and code without proper coverage got committed, so coverage dropped? (Do you even remember what coverage was last week? Can you find out without going back through commits and running coverage reports on each of them?) Are you sure you want to introduce your changes on top of those changes?

    Let's make it even more complex: what if the project consists of many other projects and the one you're working on brings all those dependencies together? Maybe you should check out all those dependency projects and see if they are in a good state too. And even more complex: what if the codebase isn't just a set of projects bundled into one big project, but many microservices that talk to each other via some form of RPC? OK, maybe you can manage to do all of this to know the code is in a good state before you introduce your changes, but what about point (b) I brought up earlier, about when the code broke? The longer it has been in that state, the longer it's going to take to trace the breaking change and the longer to fix it. Can you go and check every single project every time someone makes a commit, to make sure that commit is good? Well, actually you can: you just need a build server. The point of all of this is that whatever your integration is, it should be continuous, and it's continuous if it runs after somebody changes something (or maybe before is a better option?☺ I'll get to this later.)
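
    To ground this a bit, a build-server job for a Scala project could chain several of the checks above into one sbt invocation; a rough sketch, assuming the sbt-scoverage and scalastyle plugins (versions illustrative):

    // project/plugins.sbt
    addSbtPlugin("org.scoverage" % "sbt-scoverage" % "1.5.1")
    addSbtPlugin("org.scalastyle" %% "scalastyle-sbt-plugin" % "1.0.0")

    // what the build server could run on every commit:
    //   sbt clean coverage test coverageReport scalastyle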

    read more
  • Simple Caching For Akka Actors

    Recently I needed to cache some lightweight data in an actor's local memory to avoid making unnecessary calls to a third-party service over the internet. I think what I came up with is a pretty simple implementation (hence I named the article simple caching in akka). But the idea can be extended, with few if any changes, to cache actor messages. Let's imagine the world of actors where we have one actor per user and we send messages to user actors to get some info back. Now when we say each user (actor) is responsible for caching its own data, it means each actor will use even more memory, so at some point we'll need to scale out because we'll run out of memory (not that we didn't have that problem without the cache in the first place). But it's a good thing that we know we need to scale out from the beginning, because I haven't even mentioned high availability yet, so we need at least two boxes.
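
    A minimal sketch of the idea, assuming classic Akka actors; ThirdPartyClient, GetProfile and the one-hour TTL are made up for illustration:

    import akka.actor.Actor
    import scala.concurrent.duration._

    case class GetProfile(userId: String)

    // ThirdPartyClient is a stand-in for the real remote call
    trait ThirdPartyClient { def fetchProfile(userId: String): String }

    class CachingActor(client: ThirdPartyClient) extends Actor {
      // the cache lives in the actor's local memory, so no synchronization is needed
      private var cache = Map.empty[String, (String, Deadline)]

      def receive = {
        case GetProfile(userId) =>
          val cached = cache.get(userId).collect { case (value, ttl) if ttl.hasTimeLeft() => value }
          val value = cached.getOrElse {
            val fresh = client.fetchProfile(userId)                 // only hit the service on a miss
            cache = cache.updated(userId, (fresh, 1.hour.fromNow))  // remember it with a TTL
            fresh
          }
          sender() ! value
      }
    }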

    read more
  • Testing styles in ScalaTest

    Over time different libraries brought different testing styles: jUnit, xUnit, RSpec, etc. Nowadays, since people have different backgrounds and different tastes when it comes to writing unit tests, most test libraries offer many testing styles, and ScalaTest isn't an exception. To me there are three groups of testing styles. One would be the traditional jUnit/xUnit style, where you define a test class and unit tests are methods in that class; you also define setUp and tearDown methods that are called before and after each unit test. Of course there are also setUp and tearDown versions that run once per test suite, but I don't want to go into that here.

    I must admit this isn't my favorite style, and here is why. I like to unit test aggressively without skipping any business cases, instead of just relying on code coverage and saying my other tests already hit those lines. If I have a class with many methods, I write many unit tests for each method. An example would be a method that takes a parameter and does different things based on its value; of course we want multiple unit tests to cover all the business cases for that method. So this alone groups unit tests in a test class by function. Another thing is that unit tests for function A will most likely share some setup logic. This testing style forces us to put the setup code for all of function A's unit tests in the setUp method, and when we have, say, 10 such functions, the setUp method ends up containing the setup code for all of them, which makes it long, hard to read, and makes it extremely hard to tell which setup code is for which unit test. Another option is to define setup code in each unit test, which isn't really a good idea either, since we'll have duplicated code and unit tests will be longer than they need to be and harder to understand. I see some people solve this problem by moving the common code into private methods. Although this solves the duplication problem and makes unit tests shorter, it introduces another big problem. When I read a unit test I want to know what it's doing in 2 seconds; unit tests are documentation, right? Now you come across this private method call, you have to find its implementation, read the code there, and jump back to continue reading what that unit test is trying to verify. It's a nightmare to me; I call that unit test anything but readable, and I sure don't want to be the one maintaining that code.

    Before I move on to the next type of unit tests, I must mention which styles ScalaTest offers here: FunSuite and FlatSpec. You can read more about those in the ScalaTest documentation. Both are flat, as mentioned; one is more like jUnit, the other more like xUnit.
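
    For reference, a minimal flat-style suite looks roughly like this (using the org.scalatest.FunSuite import from ScalaTest 2.x/3.0; newer releases moved it to org.scalatest.funsuite.AnyFunSuite):

    import org.scalatest.FunSuite

    class CalculatorSuite extends FunSuite {
      test("adding two positive numbers") {
        assert(1 + 2 === 3)
      }

      test("adding a negative number") {
        assert(1 + -2 === -1)
      }
    }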

    read more
  • Activator And My Activator Templates

    When Activator first came out I wasn't really excited about it. I'm more of a command line guy, and using a web UI to help me build, test and run my code isn't very useful to me. Activator also allows you to browse the code from a web IDE. I usually use Vim, IntelliJ or GitHub for that, but I get the idea: for someone who doesn't have his or her environment set up, and maybe got a project that isn't even on GitHub yet, a web IDE can be helpful. I have to give more credit to Activator though, because of a couple of really cool things. Recently a colleague who isn't a Scala developer (and of course doesn't have a Scala environment set up with the latest sbt and all that) needed to run one of our projects on his local computer and asked me how to do it. Of course the first thing that came to my mind was: get the latest sbt and do sbt run. Then I remembered that I had started that project from one of the Activator templates I created, so it must have Activator bundled. So he ran ./activator and magic! It went and downloaded the needed sbt version and loaded the sbt environment for him. And the reason I started my own Activator templates in the first place is that they solve another big problem for me.

    read more
  • Running Akka Applications

    With akka becoming more and more popular, akka kernel being deprecated, docker rocking the world, Typesafe releasing ConductR and more folks getting into scala, I wanted to write a post about how to run akka applications. Akka applications aren't the servlet applications java developers are used to; normally they are main-class applications in a jar file with other jars as dependencies. So it's a no-brainer that we can just use the java command, and no container like tomcat or jetty is needed. I will cover how to run in development while iterating on a task, how to run in development and tunnel the port so others can point to your code before you even go to a production or pre-production environment, and how to run inside a docker container in production and pre-production environments. Unfortunately I didn't get hold of ConductR to try it myself, so I won't be writing about it in this post. Btw, most of this should be relevant to other scala/java main-class applications, not only akka and akka http applications.
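
    As a sketch (the plugin version, main class and jar path are illustrative), one common way to get a runnable artifact is to build a fat jar with sbt-assembly and launch it with plain java:

    // project/plugins.sbt
    addSbtPlugin("com.eed3si9n" % "sbt-assembly" % "0.14.5")

    // build.sbt
    mainClass in assembly := Some("com.example.Main")

    // then build and run it:
    //   sbt assembly
    //   java -jar target/scala-2.11/myapp-assembly-1.0.jar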

    read more
  • Scalaz Semigroup, Monoid, Equal, Order, Enum, Show and Standard Scala Classes

    Scalaz has so many abstractions and higher kinds that model a lot of algebraic structures, monads and higher-level types to generalize common structures, and it offers a lot of tools that work with those abstractions. In this article I wanted to cover the very basics: the ones we see every day in the standard scala library. What the anomalies of basic types in the standard scala library are, what the common things between them are that can be made generic, and how to use scalaz with the standard classes defined in the scala library.
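
    To give a flavor of what that looks like (a few one-liners with scalaz 7 syntax; results shown in comments), the implicit instances for standard types let you write things like:

    import scalaz._, Scalaz._

    1 |+| 2                   // 3, Semigroup append for Int
    "foo" |+| "bar"           // "foobar", Semigroup append for String
    Monoid[List[Int]].zero    // List(), the Monoid identity
    1 === 1                   // true, type-safe equality from Equal
    1 lt 2                    // true, comparison from Order
    2.shows                   // "2", from Show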

    read more
  • Scalaz

    Scala is an object oriented language with functional aspects. Some people like the object oriented side, some are attracted by the functional part. Add to that the fact that it sits on top of the JVM and is of course heavily influenced by Java: we still have nulls, we still have exceptions, IO, state mutation, etc. And since Scala isn't purely functional, it still allows us to write imperative code, call a Java library that throws an exception, and so on. I personally think that functional code is easier to read, understand, unit test and maintain (not even talking about state and multiple threads). So when it comes to writing purely functional code (or as close to it as possible), Scala sure misses certain things, and scalaz has a lot built to help. I decided to write a series of posts to explain scalaz. I'll start with the basics and the most common abstractions, then explain more real-world cases. I'll put links to all the posts here so it's easy to navigate.

    read more
  • Methods Scalaz Adds To Standard Classes - Integer, String and Boolean

    In my previous article about Semigroup, Monoid, Equal, Order, Enum and Show, I covered what those are, what methods they have, and what derived and syntax-sugar methods scalaz brings for them. I also showed that there are implicit implementations of those for scala standard classes, and once they are brought into scope you have access to those methods. In this article I'm gonna show those methods in action for Int, String and Boolean.
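
    A small preview of the kind of syntax the article goes through (scalaz 7; the exact methods can vary a little between versions):

    import scalaz._, Scalaz._

    true ? "yes" | "no"       // "yes", ternary-style syntax for Boolean
    false.option(42)          // None, lift a value into Option based on a Boolean
    "123".parseInt.toOption   // Some(123), safe parsing from String
    1 |-> 4                   // List(1, 2, 3, 4), ranges via the Int Enum instance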

    read more
  • Scalaz Functor

    A Functor is a mapping from a higher kind F[A] to F[B]. Scalaz defines the Functor[F[_]] trait with an abstract map method.

    def map[A, B](fa: F[A])(f: A => B): F[B]

    Given a higher kind of A and an A => B mapping, we can get a higher kind of B.
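
    For example, a quick sketch with the standard instances:

    import scalaz._, Scalaz._

    Functor[Option].map(1.some)(_ + 1)       // Some(2)
    Functor[List].map(List(1, 2, 3))(_ * 2)  // List(2, 4, 6)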

    read more
  • Scalaz Apply

    Apply is a Functor that also has an ap method. Scalaz defines the Apply[F[_]] trait with an abstract ap method.

    def ap[A, B](fa: => F[A])(f: => F[A => B]): F[B]

    Given a higher kind of A and a higher kind of an A => B mapping, we can get a higher kind of B. And since Apply is also a Functor, we also have

    def map[A, B](fa: F[A])(f: A => B): F[B]
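
    For example, with the Option instance (a quick sketch):

    import scalaz._, Scalaz._

    val add1: Option[Int => Int] = Some(_ + 1)
    Apply[Option].ap(2.some)(add1)                // Some(3)
    Apply[Option].apply2(1.some, 2.some)(_ + _)   // Some(3), derived from ap and map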
    read more
  • Open Sourcing Task Library For Queueing Reliable Task Execution

    Here at Ad Hoc Labs we are open-sourcing one of the projects I started to handle reliable execution of different tasks.

    I wanted to build a library that would:

    • have a higher level of abstraction than queuing
    • allow development of new types of tasks with only config changes
    • be back-end independent, so backends like RabbitMQ and Kafka would work based on config
    • allow flexibility to have publishers running in one project and consumers running in another
    • allow flexibility to decide which projects run what consumers and producers and how many of them.

    Our product Burner is a privacy layer for phone numbers. There are many things we want to run in a reliable manner, and we need to queue tasks and execute things in an asynchronous way. We are a Scala shop and among many Scala libraries we also use Akka.

    In the Akka world, we have actor A sending a message to actor B. There are many things that can go wrong, even if both are running in the same JVM. Actor B may be down when actor A sends the message. Actor B may fail to process the task, so there is a need for retries. Some tasks can take a long time to process, so there is a need to run things asynchronously. You may want to scale out and have many instances of actor B, etc.

    This actor model is very clean and well-suited for this type of situation. And if you’re familiar with Akka, you know that Akka handles some of those cases out of the box, some via Akka persistence, and some things simply aren’t there.

    So I wanted to have a very simple library that wouldn't require a Cassandra cluster, force you to write a lot of boilerplate, or force the use of one specific message queue; I wanted it to be extremely flexible and extendable.

    The core idea is the same: actor A sends a task to actor B to process, and actor B will eventually get the task. It’s very similar to the way you send a message from actor A to actor B in Akka; there’s no other complexity there.
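
    To make the analogy concrete, here is the plain Akka interaction the library mirrors at a higher level of abstraction (this is ordinary Akka code, not the library's API):

    import akka.actor.{Actor, ActorSystem, Props}

    case class ProcessTask(id: Long)

    class Worker extends Actor {
      def receive = {
        case ProcessTask(id) => println(s"processing task $id")
      }
    }

    object Demo extends App {
      val system = ActorSystem("demo")
      val worker = system.actorOf(Props[Worker], "worker")
      // fire-and-forget send; the library layers queuing, retries and
      // backend independence on top of this basic pattern
      worker ! ProcessTask(1L)
    }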

    read more
  • Scalaz Applicative

    Applicative is an Apply that also has a point (or pure) method. Scalaz defines the Applicative[F[_]] trait with an abstract point method.

    def point[A](a: => A): F[A]

    The point method lifts an A into F[A]. Since Applicative is an Apply, it inherits all the methods that Apply offers, and it still needs implementations for the map and ap methods. Note that Applicative implements map using the point and ap methods:

    override def map[A, B](fa: F[A])(f: A => B): F[B] = ap(fa)(point(f))
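
    For example, a quick sketch with the Option instance:

    import scalaz._, Scalaz._

    Applicative[Option].point(3)              // Some(3)
    3.point[Option]                           // Some(3), the same thing via syntax
    Applicative[Option].map(3.some)(_ + 1)    // Some(4), map derived from point and ap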
    read more
  • Scalaz Bind

    Bind is an Apply that also has a bind method. Scalaz defines the Bind[F[_]] trait with an abstract bind method.

    def bind[A, B](fa: F[A])(f: A => F[B]): F[B]

    Given F[A] and a mapping from A to F[B], it returns F[B].

    Since Bind is an Apply, it inherits all the methods that Apply offers, and it still needs implementations for the map and ap methods. Note that Bind implements ap using the bind and map methods:

    override def ap[A, B](fa: => F[A])(f: => F[A => B]): F[B] = {
        lazy val fa0 = fa
        bind(f)(map(fa0))
    }
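
    For example, a quick sketch with the Option and List instances:

    import scalaz._, Scalaz._

    Bind[Option].bind(3.some)(x => (x + 1).some)       // Some(4)
    Bind[List].bind(List(1, 2))(x => List(x, x * 10))  // List(1, 10, 2, 20)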
    read more
  • Scalaz Cobind

    Cobind is a Functor that also has a cobind method. Scalaz defines the Cobind[F[_]] trait with an abstract cobind method.

    def cobind[A, B](fa: F[A])(f: F[A] => B): F[B]

    Given F[A] and a mapping from F[A] to B, it returns F[B]. Note that unlike the bind method in Bind, cobind takes an F[A] => B function instead of an A => F[B] one.

    Since we have the cobind method we can define a cojoin method based on it. Unlike join in Bind, it turns F[A] into F[F[A]] instead of F[F[A]] into F[A].

    def cojoin[A](fa: F[A]): F[F[A]] = cobind(fa)(fa => fa)
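
    For example, NonEmptyList has a Cobind (in fact Comonad) instance that works over successive tails; a sketch, with the exact instances depending on the scalaz version:

    import scalaz._, Scalaz._

    val nel = NonEmptyList(1, 2, 3)
    Cobind[NonEmptyList].cobind(nel)(_.size)  // NonEmptyList(3, 2, 1)
    Cobind[NonEmptyList].cojoin(nel)          // NonEmptyList(NonEmptyList(1, 2, 3), NonEmptyList(2, 3), NonEmptyList(3))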
    read more
  • Scalaz Comonad

    Comonad is a Cobind that also has a copoint method. Scalaz defines Comonad[F[_]] with an abstract copoint method.

    def copoint[A](p: F[A]): A

    Note that while the point method of Monad turns A into F[A], the copoint method of Comonad turns F[A] into A.
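
    For example, NonEmptyList is one of the standard instances, and its copoint is simply the head:

    import scalaz._, Scalaz._

    Comonad[NonEmptyList].copoint(NonEmptyList(1, 2, 3))  // 1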

    read more
  • Scalaz Foldable

    Scalaz provides Foldable to define all the fold operations. It has the Foldable[F[_]] trait with two abstract methods, foldMap and foldRight:

    def foldMap[A,B](fa: F[A])(f: A => B)(implicit F: Monoid[B]): B
    def foldRight[A, B](fa: F[A], z: => B)(f: (A, => B) => B): B
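
    For example, a quick sketch with the List instance; foldMap maps each element and combines the results with the Int Monoid (addition with zero 0):

    import scalaz._, Scalaz._

    Foldable[List].foldMap(List(1, 2, 3))(_ * 2)   // 12, doubled then summed via Monoid[Int]
    Foldable[List].suml(List(1, 2, 3))             // 6, a derived method that sums using the Monoid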
    read more
  • Scalaz Monad

    Monad is an Applicative and a Bind. With that we deal with:

    def ap[A, B](fa: => F[A])(f: => F[A => B]): F[B]
    def map[A, B](fa: F[A])(f: A => B): F[B]

    from Apply

    def point[A](a: => A): F[A]
    override def map[A, B](fa: F[A])(f: A => B): F[B] = ap(fa)(point(f))

    from Applicative

    def bind[A, B](fa: F[A])(f: A => F[B]): F[B]
    override def ap[A, B](fa: => F[A])(f: => F[A => B]): F[B] = bind(f)(map(fa))

    from Bind.

    Note that in Applicative the map method is implemented using the ap method, and in Bind, ap is implemented using the bind method. With that, the two methods left to implement for a Monad are point and bind.
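
    For example, once point and bind are provided, the rest comes for free (a quick sketch with the Option instance):

    import scalaz._, Scalaz._

    Monad[Option].point(3)                          // Some(3)
    Monad[Option].bind(3.some)(x => (x + 1).some)   // Some(4)
    Monad[Option].map(3.some)(_ + 1)                // Some(4), map derived from point and bind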

    read more
  • Scalaz Plus, PlusEmpty, IsEmpty, ApplicativePlus and MonadPlus

    read more
  • Ammonite Modules

    Ammonite is such a great project, especially Ammonite-REPL. It brings many great things to the scala console: syntax highlighting, better editing, dynamically loading dependencies, loading scripts, etc. It comes in very handy if you want to play with some library without creating an sbt project, or you want to try something out without necessarily loading IntelliJ. I happen to do this often and decided to start an open source project called ammonite-modules. The idea is to have pre-defined modules you can load in the Ammonite REPL. Modules will have all the dependencies needed at their latest versions and do some basic setup.
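
    For instance (the library coordinates here are just an example), Ammonite's magic imports let a module pull in its dependencies right from the REPL:

    // inside the Ammonite REPL: load a dependency on the fly, then use it
    import $ivy.`org.scalaz::scalaz-core:7.2.7`
    import scalaz._, Scalaz._

    1 |+| 2   // 3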

    read more
  • Akka Http Request Throttling

    Akka Http is a great framework for rest microservices, and here at Ad Hoc Labs we're slowly migrating from Spray to Akka Http. Since we use Spray extensively, we ended up with utility classes to complement some functionality here and there, or shared in a common jar with other utility classes. With Akka Http we want to keep things a little more clean and reduce boilerplate with high level constructs. We decided to start a new project named akka-http-contrib that will host all the akka http functionality we're building that's not already in the akka-http project. We recently open sourced it, since other community members will find it helpful and hopefully will contribute as well.

    The first functionality we needed for a new project we started with Akka Http was throttling for some endpoints. We didn't want to write boilerplate for each endpoint we throttle; instead we wanted to be able to introduce throttling for any endpoint by adding it to the typesafe config. Basically we wanted a directive that wraps the routes and, when a request comes in, depending on the config and thresholds, either lets the route execute or completes the request with 429 Too Many Requests and prevents the route from being executed. That's the high level, which should handle most of the use cases, but we also wanted it extensible enough that whenever there is a special use case it can still be handled by writing a little code.
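
    To illustrate the shape of the idea only (this is a hypothetical sketch, not akka-http-contrib's actual API), a directive-like wrapper that either lets the inner route run or short-circuits with 429 could look like this:

    import akka.http.scaladsl.model.StatusCodes
    import akka.http.scaladsl.server.Directives._
    import akka.http.scaladsl.server.Route

    // `allowed` stands in for the config-driven threshold check described above
    def throttled(allowed: => Boolean)(route: Route): Route =
      extractRequest { _ =>
        if (allowed) route
        else complete(StatusCodes.TooManyRequests)  // 429, the inner route never executes
      }

    // usage sketch: throttled(rateLimiter.tryAcquire()) { path("events") { complete("ok") } }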

    read more
  • Writing Aws Lambdas In Scala

    AWS Lambda supports writing handlers in Nodejs, Python and Java. The first two are really easy: if you don't have any external dependencies and the script is easy to write, you can even write your lambda in the AWS console. However, when it comes to Java, things get a little more complicated than that (not even talking about Scala yet). At the very least you'll need to create a new project, know what jar dependencies to add for lambda, implement the handler method in a class that potentially implements some interface, know how to generate a deployment package (a fat jar or a zip file for java projects), upload it to S3 (since that jar will have a decent size), etc. If I want to implement my lambda in Scala, I have to deal with all those Java-project complexities and more: extra dependencies, how to write actual Scala code (not Java in Scala syntax), interoperability, immutability, built-in features like case classes, futures, etc. I thought it's not fair for Scala developers, and things can be much simpler (maybe not Nodejs simple), but at least it has to be simpler than doing it in Java, since we all know Scala has more things to make a developer's job easier.
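
    For context, the Java-interface route looks roughly like this in Scala (the handler name and types are just an example), using the RequestHandler interface from aws-lambda-java-core:

    import com.amazonaws.services.lambda.runtime.{Context, RequestHandler}

    // a minimal Scala handler implementing the AWS Java interface
    class HelloHandler extends RequestHandler[String, String] {
      override def handleRequest(input: String, context: Context): String = {
        context.getLogger.log(s"received: $input")
        s"Hello, $input"
      }
    }

    // packaged as a fat jar (e.g. with sbt-assembly) and configured in AWS with the
    // handler string "HelloHandler::handleRequest"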

    read more
  • Moving My Activator Templates To Giter8

    I have had a few Activator templates for a while now that I use to start different types of Scala projects. I open sourced those and made them available via GitHub and the Activator website. There was lots of traction from the community, and since Activator is deprecated now, I decided to take those and convert them to Giter8 templates so sbt new can be used to start projects. The new templates work with sbt 1, and all the dependencies, including the sbt plugins in the projects, are upgraded.
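
    Starting a project from a Giter8 template is then a one-liner; for example, with the official Scala seed template (not one of my templates, just the canonical example):

    sbt new scala/scala-seed.g8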

    read more