I like to use docker to run my server-side apps, and my node apps aren't an
exception. I decided to show the Dockerfile I use for my node apps and explain
what I'm doing there. I copy the entire source, listing the files I want to
exclude in a .dockerignore file. The most important thing to note is that I don't
copy node_modules directories; I do a fresh install inside docker. The main
reason is that the docker container is essentially a different operating system
and potentially a different distro (Debian in this instance, because I love it!)
than the dev or CI box. So why not build the code where it's going to run, and
if there are issues I'll know about them sooner. Also note that I run npm install
with the production switch so it doesn't install dev dependencies, which I don't
really need to run my app.
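A minimal sketch of such a Dockerfile (the image tag, port and entry point are my own placeholders, not from the article):

```dockerfile
# Debian-based node image; the exact tag is a placeholder
FROM node:16-bullseye

WORKDIR /app

# copy the entire source; .dockerignore keeps out node_modules,
# .git, logs and anything else we don't want in the image
COPY . .

# fresh install inside the container, skipping dev dependencies
RUN npm install --production

EXPOSE 3000
CMD ["node", "server.js"]
```

The fresh `npm install --production` inside the build is the key point: any native modules get compiled against the container's own distro, not the dev box.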
In my previous article I wrote about Semigroup, Monoid, Equal, Order, Enum
and Show. I covered what those are, what methods they have, and what derived
and syntactic-sugar methods scalaz brings for them. I also showed that there are
implicit implementations of those for scala standard classes, and that once they
are brought into scope you'll have access to them. In this article I'm going to
show those methods in action for integer, string and boolean.
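As a minimal, hand-rolled sketch of the pattern (the trait and instance names follow scalaz, but this runs without the library; with real scalaz you'd just `import scalaz._, Scalaz._` and get all of it for free):

```scala
// Hand-rolled stand-ins for scalaz's Semigroup and Monoid, plus the |+|
// syntax sugar, so the example runs without the scalaz dependency.
trait Semigroup[A] { def append(a: A, b: A): A }
trait Monoid[A] extends Semigroup[A] { def zero: A }

object Instances {
  // the kind of implicit instances scalaz ships for standard scala types
  implicit val intMonoid: Monoid[Int] = new Monoid[Int] {
    def zero = 0
    def append(a: Int, b: Int) = a + b
  }
  implicit val stringMonoid: Monoid[String] = new Monoid[String] {
    def zero = ""
    def append(a: String, b: String) = a + b
  }
  // Boolean has two lawful monoids; disjunction (||) is picked here
  implicit val boolMonoid: Monoid[Boolean] = new Monoid[Boolean] {
    def zero = false
    def append(a: Boolean, b: Boolean) = a || b
  }
}

object Syntax {
  // once instances are in scope, |+| works on plain values
  implicit class MonoidOps[A](val a: A) extends AnyVal {
    def |+|(b: A)(implicit m: Monoid[A]): A = m.append(a, b)
  }
}
```

With `Instances._` and `Syntax._` imported, `1 |+| 2` yields `3` and `"foo" |+| "bar"` yields `"foobar"`, which mirrors what the scalaz imports give you.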
Scala is an object-oriented language with functional aspects. Some people
like the object-oriented aspect; some are attracted by the functional part.
Add to that the fact that it's sitting on top of the JVM and is of course
influenced by Java a lot: we still have nulls, we still have exceptions,
IO and state mutation, etc. And since Scala isn't purely functional, it still
allows us to write imperative code, call a Java library that will throw an
exception, and so on. I personally think that functional code is easier to read,
understand, unit test and maintain (not even talking about state and multiple
threads). So when it comes to writing purely functional code (or as close to it
as possible), Scala sure misses certain things, and scalaz has a lot built to
help. I decided to write a series of posts to explain scalaz. I'll start with the
basics, the most common abstractions, then explain more real-world cases.
I'll put the links to all posts here so it's easy to navigate.
Scalaz has many abstractions and higher kinds that model a lot of
algebraic structures, monads, and higher-level types to generalize common
structures, and it offers a lot of tools that work with those abstractions.
In this article I wanted to cover the very basics, the ones we see every day
in the standard scala library: what the quirks of basic types are in the
standard scala library, what the common things between them are that can be
made generic, and how to use scalaz with standard classes defined in the
scala library.
With akka becoming more and more popular, akka kernel being deprecated,
docker rocking the world, Typesafe releasing ConductR and more folks
getting into scala, I wanted to write a post about how to run akka applications.
Akka applications aren't the servlet applications java developers are used to;
normally they are main-class applications in a jar file with other jars as
dependencies. So it's a no-brainer that we can just use the java command, and
no container like tomcat or jetty is needed. I will cover how to run in
development while iterating on a task; how to run in development, tunnel
the port and let others point to your code before you even go to a production
or pre-production environment; and how to run inside a docker container in
production and pre-production environments. Unfortunately I didn't get hold of
ConductR to try it myself, so I won't be writing about it in
this post. By the way, most of this should be relevant to other scala/java
main-class applications, not only akka and akka-http applications.
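The shape of such an application, in plain Scala with no akka dependency (the object name and jar name are my own placeholders): once it's packaged into a fat jar, for example with sbt-assembly, nothing more than `java -jar app-assembly.jar` is needed to run it.

```scala
// A minimal main-class application: the kind of thing that runs with a bare
// java command, no tomcat or jetty needed. In a real akka app, main is where
// the ActorSystem would be created and the HTTP port bound.
object Boot {
  def main(args: Array[String]): Unit = {
    println("server started")
  }
}
```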
When activator first came out I wasn't really excited about it. I'm more of a
command-line guy, and using a web UI to help me build, test and run my code isn't
very useful to me. Activator also allows you to browse the code from a web IDE. I
usually use Vim, IntelliJ or GitHub for that, but I get the idea: for someone who
doesn't have his or her environment set up, and maybe has a project that's not
even in github yet, a web IDE can be helpful. I have to give more credit to
activator though, because of a couple of really cool things. Recently a colleague
of mine who's not a scala developer (and of course doesn't have a scala
environment set up with the latest sbt and all that) needed to run one of our
projects on his local computer and asked me how to do it.
Of course the first thing that came to my mind was: get the latest sbt and do sbt
run. Then I remembered that I had started that project from one of the activator
templates I created, so it must have activator there. So he ran ./activator
and magic! It went and downloaded the needed sbt version and loaded the sbt
environment for him. And the reason I started my own activator templates in the
first place is that they solve another big problem for me.
Over time different libraries brought different testing styles: jUnit, xUnit,
RSpec, etc. Nowadays, since many people have different backgrounds and
different tastes when it comes to writing unit tests, most test libraries
offer many testing styles, and ScalaTest isn't an exception. To me there are
three groups of testing styles. One is the traditional jUnit/xUnit
style, where you define a test class and unit tests are methods in that test
class; you also define setUp and tearDown methods that must be called
before and after each unit test. Of course there are also setUp and tearDown
versions that run once per test suite, but I don't want to go into that here.
I must admit that this isn't my favorite style, and here is why. I like to unit
test aggressively without skipping any business cases, instead of just
relying on code coverage and saying my other tests already hit those
lines. If I have a class with many methods, I write many unit tests for each
method. An example would be a method that takes a parameter and does different
things based on the parameter's value; of course we want multiple unit tests
for that method to cover all its business cases. So this alone groups unit
tests in a test class by function. Another thing is that the unit tests for
function A will most likely share some setup logic. This testing style forces
us to put the setup code for all of function A's unit tests into the setUp
method, and when we have, say, 10 such functions, the setUp method ends up with
the setup code for all of them, which makes it long and hard to read; it also
becomes extremely hard to tell which setup code belongs to which unit test.
Another option is to define the setup code in each unit test, which isn't
really a good idea either, since we'll have duplicated code, and unit tests
will now be longer than they need to be and hard to understand. I see some
people solve this problem by moving the common code into private methods.
Although this solves the code duplication problem and makes unit tests shorter,
it introduces another big problem. When I read a unit test I want to know what
it's doing in 2 seconds; unit tests are documentation, right? Now you come
across this private method call: you have to find its implementation, read the
code there, and jump back to continue reading what that unit test is trying to
test. It's a nightmare to me; I call that unit test anything but readable, and
I sure don't want to be the one maintaining that code.
Before I move on to the next type of unit tests, I must mention which styles
ScalaTest offers here: FunSuite and FlatSpec. You can read more about them in
the ScalaTest documentation. Both are flat, as mentioned; one is more like
jUnit, the other more like xUnit.
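As a rough sketch of the two flat styles (this assumes the ScalaTest dependency on the classpath, and the class and test names are my own, not from the documentation):

```scala
import org.scalatest.{FlatSpec, FunSuite}

// jUnit-like: tests are functions registered with test(...)
class CalculatorSuite extends FunSuite {
  test("add returns the sum of its arguments") {
    assert(1 + 2 == 3)
  }
}

// xUnit/spec-like: tests read as "subject should behavior"
class CalculatorSpec extends FlatSpec {
  "add" should "return the sum of its arguments" in {
    assert(1 + 2 == 3)
  }
}
```

Both define a flat list of tests per class; only the registration syntax differs.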
Recently I needed to cache some lightweight data in an actor's local memory to
avoid unnecessary calls to a third-party service over the internet. I think what
I came up with is a pretty simple implementation (hence I named the article
simple caching in akka), but the idea can be extended, with small changes
if any, to cache actor messages. Let's imagine a world of actors where
we have one actor per user, and we send messages to user actors to get some info
back. Now, when we say each user (actor) is responsible for caching its own data,
it means each actor will use even more memory, so at some point we'll need to
scale out because we'll run out of memory (not that we didn't have that
problem without the cache in the first place). But it's a good thing that we
know we need to scale out from the beginning, because I haven't even mentioned
high availability yet, so we need at least two boxes.
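A minimal sketch of such a cache as plain actor-local state (the class name and the time-to-live policy are my own, not from the article; inside the actor this would simply be a field, which is safe because an actor processes one message at a time):

```scala
import scala.collection.mutable

// A tiny TTL cache: entries expire after ttlMillis and are fetched again.
// The clock is injectable so the expiry logic is easy to unit test.
class TtlCache[K, V](ttlMillis: Long,
                     now: () => Long = () => System.currentTimeMillis) {
  private val entries = mutable.Map.empty[K, (V, Long)]

  // return the cached value if it's still fresh, otherwise run `fetch`
  // (e.g. the call to the third-party service) and cache the result
  def getOrElseUpdate(key: K)(fetch: => V): V =
    entries.get(key) match {
      case Some((value, storedAt)) if now() - storedAt < ttlMillis => value
      case _ =>
        val value = fetch
        entries.update(key, (value, now()))
        value
    }
}
```

A user actor would keep one `TtlCache` instance and wrap its outbound service call in `getOrElseUpdate`, which is exactly where the extra per-actor memory cost described above comes from.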