• Linting Docker Files With From Latest Tool

    Recently, folks at Replicated created an open source tool called FROM:latest, which is a linting tool for Dockerfiles. You paste in the contents of your Dockerfile and it tries to find issues in it. Such a cool idea; I had never come across a tool like this for Dockerfiles before. I decided to try it with the Dockerfiles I wrote in the past to find potential issues in them, and it actually found a few.
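
    For example, a Dockerfile like this one (a made-up example, not taken from the tool's docs) is the kind of input you would paste in; things like an unpinned base image or unpinned package installs are typical findings for a Dockerfile linter:

    ```dockerfile
    # Unpinned base image: the build may silently change over time
    FROM ubuntu:latest

    # Installing packages without pinned versions is another common finding
    RUN apt-get update && apt-get install -y nginx

    CMD ["nginx", "-g", "daemon off;"]
    ```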

    read more
  • Scalaz Applicative

    Applicative is an Apply that also has a point (or pure) method. Scalaz defines the Applicative[F[_]] trait with an abstract point method.

    def point[A](a: => A): F[A]

    The point method lifts an A into an F[A]. Since Applicative is an Apply, it inherits all the methods that Apply offers; an instance only needs to provide implementations for the point and ap methods. Note that Applicative implements map using point and ap:

    override def map[A, B](fa: F[A])(f: A => B): F[B] = ap(fa)(point(f))
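
    To make this concrete, here is a minimal self-contained sketch (not the actual Scalaz hierarchy) with a hypothetical instance for Option:

    ```scala
    import scala.language.higherKinds

    // Minimal sketch, NOT the real Scalaz hierarchy:
    // Apply provides ap; Applicative adds point and derives map from them.
    trait Apply[F[_]] {
      def ap[A, B](fa: => F[A])(f: => F[A => B]): F[B]
    }

    trait Applicative[F[_]] extends Apply[F] {
      def point[A](a: => A): F[A]
      // map comes for free once point and ap are defined
      def map[A, B](fa: F[A])(f: A => B): F[B] = ap(fa)(point(f))
    }

    // A hypothetical instance for Option, just to illustrate the mechanics
    val optionApplicative: Applicative[Option] = new Applicative[Option] {
      def point[A](a: => A): Option[A] = Some(a)
      def ap[A, B](fa: => Option[A])(f: => Option[A => B]): Option[B] =
        (fa, f) match {
          case (Some(a), Some(g)) => Some(g(a))
          case _                  => None
        }
    }

    val lifted  = optionApplicative.point(3)           // Some(3)
    val doubled = optionApplicative.map(lifted)(_ * 2) // Some(6)
    ```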
    read more
  • Open Sourcing Task Library For Queueing Reliable Task Execution

    Here at Ad Hoc Labs we are open-sourcing one of the projects I started to handle reliable execution of different tasks.

    I wanted to build a library that would:

    • have a higher level of abstraction than queuing
    • allow development of new types of tasks with only config changes
    • be back-end independent, so backends like RabbitMQ and Kafka would work based on config
    • allow flexibility to have publishers running in one project and consumers running in another
    • allow flexibility to decide which projects run which consumers and producers, and how many of them

    Our product Burner is a privacy layer for phone numbers. There are many things we want to run in a reliable manner, and we need to queue tasks and execute things in an asynchronous way. We are a Scala shop and among many Scala libraries we also use Akka.

    In the Akka world, we have actor A sending a message to actor B. There are many things that can go wrong, even if both are running in the same JVM. Actor B may be down when actor A sends the message. Actor B may fail to process the task, so there is a need for retries. Some tasks can take a long time to process, so there is a need to run things asynchronously. You may want to scale out and have many instances of actor B, and so on.

    This actor model is very clean and well-suited for this type of situation. And if you’re familiar with Akka, you know that Akka handles some of those cases out of the box, some via Akka persistence, and some things simply aren’t there.

    So I wanted to have a very simple library that wouldn't require a Cassandra cluster, force you to write a lot of boilerplate, or force the use of one specific message queue. I wanted it to be extremely flexible and extensible.

    The core idea is the same: actor A sends a task to actor B to process, and actor B will eventually get the task. It’s very similar to the way you send a message from actor A to actor B in Akka; there’s no other complexity there.
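
    As an illustration of that idea (these names are hypothetical, not the library's actual API), a backend-independent task interface might look roughly like this:

    ```scala
    import scala.collection.mutable

    // Hypothetical sketch, not the real library's API: a task plus a
    // backend trait that could be implemented over RabbitMQ, Kafka, etc.
    case class Task(name: String, payload: String)

    trait TaskBackend {
      def publish(task: Task): Unit
      def subscribe(handler: Task => Unit): Unit
    }

    // In-memory backend, just enough to show that publishers and consumers
    // only agree on the Task shape; the concrete backend comes from config
    class InMemoryBackend extends TaskBackend {
      private val handlers = mutable.Buffer.empty[Task => Unit]
      def publish(task: Task): Unit = handlers.foreach(_.apply(task))
      def subscribe(handler: Task => Unit): Unit = handlers += handler
    }

    val backend = new InMemoryBackend
    backend.subscribe(t => println(s"processing ${t.name}"))
    backend.publish(Task("send-sms", "hello"))  // prints "processing send-sms"
    ```

    A publisher in one project and a consumer in another would share only the Task definition and the backend configuration.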

    read more
  • Installing Openvpn In Osmc

    I was running Raspbmc on a Raspberry Pi, which I like to use once a year (but I'm not gonna write about that part). What's interesting about that device is that I always have it on and connected to my router, which makes it a great candidate for hosting a VPN server. Instead of getting another Pi to run a VPN server and keep it up and running 24/7, it's a good idea to just install OpenVPN on the same device, since it's built on top of Linux and that's what I had. There have been a lot of changes to the project, and it also changed its name to OSMC. So I decided to reformat, get OSMC, and install OpenVPN on it again. Since I didn't provision that server with Chef (it would be interesting to see someone run chef-client on a Pi, though), I had to do everything manually again. I decided to write about it in case others are interested in creating a similar setup. And since OSMC is built on top of Debian, this guide mostly applies to any Debian-based setup.

    read more
  • Starting To Use Editorconfig

    I have known about EditorConfig for a while now but only recently decided to use it. It tells your editor whether to use tabs or spaces, the indent size, the tab width, what the end of line should be, the charset, whether to trim trailing whitespace, and whether to add a newline at the end of the file. All of this can be configured per file extension(s) and per project. It supports pretty much all editors and IDEs out there: either they support it natively or there is a plugin for it. I'm mostly using Vim, so I went ahead and installed the plugin for Vim. Most of the configuration it supports I honestly already have in my vim config (partially that's why I wasn't in a rush to start using it). The only thing I needed it to solve for me was removing trailing whitespace and adding a final newline for Scala and Java files (the second one btw didn't work for me in Vim, not sure what the deal is there). So obviously it does very little for me; the real benefit of this project is when other people are also working on the same project. You end up with a consistent style, and that's the kind of thing I always appreciate.
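
    As a sketch, an .editorconfig covering the settings mentioned above might look like this (the globs and values are just an example, not my actual config):

    ```ini
    # Top-most EditorConfig file; editors stop searching parent directories
    root = true

    # Defaults for every file in the project
    [*]
    charset = utf-8
    end_of_line = lf
    indent_style = space
    indent_size = 2
    trim_trailing_whitespace = true
    insert_final_newline = true

    # Wider indent for Scala and Java sources
    [*.{scala,java}]
    indent_size = 4
    ```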

    read more
  • Continuous Integration For Node Projects

    A while ago I wrote a post about continuous integration with Jenkins for Scala projects, and I thought I would do one for node developers also. Make sure to check out my other posts about how to write your gulp files, since we'll rely heavily on those gulp tasks; I will put the links at the end of this post.

    So what is continuous integration? At a very high level, it answers the question: is the code in a good state, no matter how many people are updating it, how often it changes, or how many building blocks the project consists of. When you start working on a feature, you're gonna branch off mainline and start introducing changes. At some point, when you find things are broken, you have to spend time figuring out whether it's because of your changes or the project was already in a bad state. And when you find out it was already in a bad state, you'll want to start tracing: (a) who introduced the breaking change, and (b) when. So to avoid the headache and frustration, what you really want to know is whether the project is in a good state before you even start working on it.

    One option would be to compile the code before you start changing it. But if the code compiles, is that enough? You also wanna know if the unit tests are passing. But is this enough? Check for compiler warnings? Style violations? Code coverage? What if you run code coverage and it shows 85%: does that mean everything is good? What if coverage was 95% last week, code without proper coverage got committed, and coverage dropped? (Do you remember what the coverage was last week? Can you even find out without going back through commits and running coverage reports on them?) Are you sure you want to introduce your changes on top of those changes?

    Let's make it even more complex: what if the project consists of many other projects, and the project you're working on brings all those dependencies together? Maybe you should check out all those dependency projects and see if they are in a good state also. Even more complex: what if the project isn't just a set of projects that get bundled into one big project, but a code base of many microservices that talk to each other via some form of RPC? OK, maybe you can manage to do all this to know the code is in a good state before you introduce your changes, but what about point (b) I brought up earlier, about when the code broke? The longer it was in that state, the longer it's gonna take to trace the breaking change, and the longer to fix it. Can you go and check every single project every time someone makes a commit, to make sure that commit is good? Well, actually you can: you just need a build server. All the points I was bringing up come down to this: whatever your integration is, it should be continuous, and it's continuous if it runs after somebody changes something (or maybe before is a better option? ☺ I'll get to this later.)

    read more
  • My Gulp Files

    Recently I decided to upgrade to Gulp 4 even though it's not released yet. And since there are some breaking changes, I had to go over my gulp files. I usually do three types of projects in node/js: REST APIs, library projects (npm packages), and Ionic and Angular apps. I decided to write blog posts and share my gulp files in case others will find them useful. I do like to use the most loved and hated language in the node world, CoffeeScript. I sure give it some love here, not because I find JavaScript hard or don't know how to properly scope my variables or make objects, but because I find code written in CoffeeScript much more readable and maintainable (yep, all those brackets and parentheses). Other libraries I religiously use are Chai and Mocha for unit testing, Istanbul for code coverage, Sinon and Proxyquire for mocking, Jade for templates, Sass for stylesheets, and of course Jenkins for continuous integration. So this combination on its own makes my gulp files pretty unique. Below are links to the individual posts where I share each gulp file and explain which task does what.

    read more
  • My Gulp Files: Rest Api Projects

    For this type of project I use CoffeeScript, Chai and Mocha for unit testing, Istanbul for code coverage, Sinon and Proxyquire for mocking, and Jenkins for continuous integration. These projects are easy to build because they don't have front-end components, compilation/optimization, browser testing, live reload, etc. And for server-side code we don't need to compile CoffeeScript to JavaScript, since it's running on the server and we can use the coffee executable to run the code instead of the node executable. I also use CoffeeScript for my gulpfile since it makes it much cleaner and more compact. Some people who do that like to also create a gulpfile.js file with

    read more
  • My Gulp Files: Library Projects

    For this type of project I use CoffeeScript, Chai and Mocha for unit testing, Istanbul for code coverage, Sinon and Proxyquire for mocking, and Jenkins for continuous integration. These projects are easy to build because they don't have front-end components, compilation/optimization, browser testing, live reload, etc. Here we do need to compile CoffeeScript to JavaScript, since we're gonna publish the package with JavaScript code and not CoffeeScript. What I do is keep my coffee code in a src directory instead of lib. At compile time I generate JavaScript code into the lib directory, add the src directory to .npmignore and the lib directory to .gitignore. That way my CoffeeScript ends up in the git repo but not the generated JavaScript, and my JavaScript ends up in the published package but not my CoffeeScript; pretty smart, right :). I also use CoffeeScript for my gulpfile since it makes it much cleaner and more compact. Some people who do that like to also create a gulpfile.js file with

    read more
  • My Gulp Files: Ionic App Projects

    For Ionic projects I also use CoffeeScript, Sass, and Jade, although Ionic generates the project in JavaScript and HTML. Here we have front-end components, compilation, browser testing, live reload, and all that goodness. We do need to compile CoffeeScript to JavaScript, Sass to CSS, and Jade to HTML. Another thing needs to be solved: since the browser will run JavaScript, CSS, and HTML instead of CoffeeScript, Sass, and Jade, debugging would be really challenging. To solve that problem, I generate source map files. I also use CoffeeScript for my gulpfile since it makes it much cleaner and more compact. Some people who do that like to also create a gulpfile.js file with

    read more