Slightly Better reloaded

September 20, 2017 Meta, Ratelware No comments

Hello and welcome!

Recently I left my former company (Nokia), and founded my own startup, Ratelware (you can find it @ ratelware.com – or will be able to find it soon, since I haven’t posted the site yet).

This means that I’ll most probably have some time to write posts. I’ll do my best to write at least one per week, but time will tell if I’ll be able to make it.

This also means that the character of the blog will change a bit – it’ll still be mostly problem-oriented, but occasionally I’ll also include some info about big changes in tools managed by Ratelware (some major releases or sth).

A minor change is that one of the old addresses – slightlybetter.pl – will soon become obsolete and the blog will no longer be available under it. From now on you should use slightlybetter.eu.

We’re gonna start next week with instructions for setting up Slick with PostgreSQL. In particular, we’re going to focus on the automatic generation of tables which contain enumeration-type values.

Stay tuned!

Hiatus

November 14, 2016 Meta 1 comment

Hey there,

sorry for the lack of updates over the last week. I got kinda overwhelmed with life and didn’t manage to keep the blog updated. Unfortunately, the reasons are still there, so most probably I won’t be able to get back to regular updates before the end of January – until then, the blog will be on hiatus.

Sorry for the inconvenience – if you feel like it’s a loss, feel free to leave a comment 😉

Adding GHCJS to the equation

September 26, 2016 GHCJS, Haskell, Yesod No comments

In this post we’ll try to remove the JS-ey part of our application – the Julius templates. Before we move on to the actual implementation, I have to warn you: GHCJS is not yet a fully mature project and installing it on Windows is a pain. I wasted nearly two days trying to do it using stack, mostly due to a problem with locale – The Internet said it was already solved, but for some reason it didn’t work. Only the solution described in https://github.com/commercialhaskell/stack/issues/1448 worked – setting a non-UTF application locale (en_US) – and still, that’s not all. I also had to install pkg-config (this StackOverflow question covers how to do it) and nodejs, and even that’s not all – older versions of GHCJS (for example the one proposed on the Stack GHCJS page) have some trouble with Cabal. In the end, I had to install GHCJS from sources on Windows. Surprisingly, that went without serious problems.

On Linux machines it also requires nasty hacks – like a symbolic link from node to nodejs. During a standalone installation this can be solved by the --with-node nodejs flag, but not during a stack installation (unless it’s somehow configurable in stack.yaml, which I’m not aware of).
Installation takes really long – I suppose it was over an hour and downloaded like, half of The Internet.

Anyway, since I’ve managed to do it, you should be able to install it too (if you’re not – write in the comments, maybe I can help?), so let’s go to teh codez!
First let’s create the GHCJS project, as a subproject to our main one. Go to the main project directory and run stack new ghcjs. That creates some files, but – surprisingly – none related to GHCJS in any way. To request compilation with GHCJS, you have to add the following code to your stack.yaml:

resolver: lts-6.18
compiler: ghcjs-0.2.0.9006015_ghc-7.10.3
compiler-check: match-exact
setup-info:
  ghcjs:
    source:
      ghcjs-0.2.0.9006015_ghc-7.10.3:
        url: "https://tolysz.org/ghcjs/lts-6.15-9006015.tar.gz"
        sha1: 4d513006622bf428a3c983ca927837e3d14ab687

If you are wondering where I got these paths from (and you should be – never trust an unknown blogger asking you to install some arbitrary packages!), it’s from a ghcjs-dom GitHub issue.

After that run stack setup and spend some time solving the problems (it takes some time, not counting the really long installation). For example, if you’re using Windows, you won’t be able to set up the environment, because this package uses a particular resolver – lts-6.15 – which has a built-in version 0.8.18.4 of the yaml package, which cannot be built on Windows, because it contains non-ASCII characters in the path (it’s this problem). Seriously, that’s one of the weirder problems I’ve encountered. The bad news is that it’s not possible to solve it elegantly. I manually changed the downloaded package to use the lts-6.18 resolver, which works fine. If you choose this solution, remember to remove sha1 from stack.yaml. Also remember that creating tar.gz files on Windows works a bit differently than on Linux and you may have trouble during repacking (luckily, 7-Zip offers an in-place change feature, which solves this issue).

And voila – we’re ready to code!
First, let’s check whether the current version (which – by default – does only some printing to the console) works with Yesod.
To do that, first compile the GHCJS project (simply stack build), then copy the all.js file from .stack-work/install/[some-hash]/bin/*.jsexe to a new directory, static/js. Then we need to include this file – this is a little cumbersome in Yesod, but adds to type-safety. In the desired handler (in our case Project.hs) you have to add addScript $ StaticR js_all_js to the implementation of renderPage. The way of inclusion is a bit weird (substituting all slashes and dots with underscores), but guarantees compile-time safety (if the file does not exist, the app will not compile), which is good.
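A minimal sketch of what that handler change might look like (renderPage and the "project" widget file are this series’ names; the exact body here is an assumption, not the repository code):

renderPage :: ProjectId -> Project -> Widget
renderPage projectId project = do
  -- js_all_js is the identifier Yesod generates for static/js/all.js
  addScript $ StaticR js_all_js
  $(widgetFile "project")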

After these changes, run stack exec -- yesod devel. Here’s a fun fact: most probably it won’t compile, complaining that js_all_js is not in scope. Thanks to this StackOverflow question we can diagnose the problem – Yesod didn’t recognize the new static file (identifiers are automatically generated for them). You need to either change or touch Settings/StaticFiles.hs, and after recompilation it’ll work.

Now run the server and open a project. And yay, it works! You can verify it by the someFunc output printed in the JavaScript console. So far so good; now we have to find a way to perform the same action we did in project.julius. There are three steps to perform:

  1. Hook on #delete-project button
  2. onClick – issue a DELETE request
  3. after a response is received – redirect user to site of response

In JavaScript it was really simple – how will it look in Haskell? Let’s find out!
The first thing we need to do is add ghcjs-dom to our dependencies: extra-deps: [ghcjs-dom-0.3.1.0, ghcjs-dom-jsffi-0.3.1.0] in stack.yaml (FFI is also required) and in the .cabal file. It turns out we also have to add ghcjs-base-0.2.0.0 to stack.yaml, but apparently it’s not a published package… luckily, stack can address this. To the packages field in stack.yaml add:

- location:
   git: https://github.com/ghcjs/ghcjs-base.git
   commit: 552da6d30fd25857a2ee8eb995d01fd4ae606d22

this will fetch the latest (as of today) commit from ghcjs-base. This is version 0.2.0.0, so it’ll compile now, right? Right. But wait, we haven’t coded anything yet! Oh dear, let’s fix that now.

Open the ghcjs/src/Lib.hs file and write:

module Lib (setupClickHandlers) where

import GHCJS.DOM (runWebGUI, webViewGetDomDocument)
import GHCJS.DOM.HTMLButtonElement (castToHTMLButtonElement)
import GHCJS.DOM.Document (getElementById)
import GHCJS.DOM.EventTarget
import GHCJS.DOM.EventTargetClosures
import GHCJS.DOM.Types

setupClickHandlers :: IO()
setupClickHandlers = do 
  runWebGUI $ \webView -> do
    Just doc <- webViewGetDomDocument webView
    Just element <- getElementById doc "delete-project"
    button <- castToHTMLButtonElement element
    clickCallback <- eventListenerNew printNow
    addEventListener button "click" (Just clickCallback) False

printNow :: MouseEvent -> IO()
printNow _ = print "Clicked!"

We won’t go through all the imports (I was heavily inspired by some GitHub examples) – it is possible that some of them aren’t necessary, and others could definitely be narrowed to explicit import lists. Nevertheless, I think that going through the code will be hard enough, so let’s ignore the details for now.

The first call, runWebGUI, is responsible for setting our code in the proper context. It calls the given function (a lambda, in our case) with the GUI view. It’s pretty cool that you can use exactly the same code for both the browser and native GTK apps (with proper linkage, obviously). Then we extract the DOM document from the GUI and the desired button from the document. In the next line, we create a callback (from a function defined a few lines lower), and attach it to the "click" event of our button. The syntax for the listener might seem a bit weird, so let’s take a look at the signature and definition:

addEventListener ::
                 (MonadIO m, IsEventTarget self, ToJSString type') =>
                   self -> type' -> Maybe EventListener -> Bool -> m ()
addEventListener self type' listener useCapture

The first two arguments – the event target (button) and event type (click) – are fairly intuitive, but why is EventListener a Maybe, and what is useCapture? useCapture is a parameter controlling the way of event propagation. It’s explained in more detail here (link from the ghcjs-jsffi source). Unfortunately, I still do not know why EventListener is a Maybe – possibly to allow changing event propagation without any actual handler? If you have an idea, let me know in the comments!

You also need to call this function (instead of someFunc) in app/Main.hs. Then compile, copy all.js to static/js (just like before) and remove the templates/project.julius file. Now, be careful, this might hurt a little: yesod devel alone won’t spot that you’ve removed the file, so you’ll get a 500: Internal Server Error wherever project.julius would be used.
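For reference, the adjusted app/Main.hs can be as small as this (a sketch based on the default stack template):

module Main where

import Lib (setupClickHandlers)

main :: IO ()
main = setupClickHandlers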

The current version implements the first point from our checklist – we’ve added a hook on the #delete-project button. Now it’s time for some bad news – we won’t be able to easily use Yesod’s typesafe routes. Obviously, we can work around it by generating an interface file – but that’s another level of infrastructure we’d need to build. That’s why we’ll leave typesafe routes for now and stick with typical strings.

With that knowledge, let’s implement the AJAX call:

data DeletionResponse = DeletionResponse { target :: String } deriving (Generic, Show)
instance ToJSON DeletionResponse where
  toEncoding = genericToEncoding defaultOptions
instance FromJSON DeletionResponse

makeRequest :: String -> Request
makeRequest projectId = Request {
  reqMethod = DELETE,
  reqURI = pack $ "/project/" ++ projectId,
  reqLogin = Nothing,
  reqHeaders = [],
  reqWithCredentials = False,
  reqData = NoData
}

requestProjectDeletion :: String -> IO (Response String)
requestProjectDeletion projectId = xhrString $ makeRequest projectId

deleteProject :: MouseEvent -> IO()
deleteProject _ = do 
  currentLocation <- getWindowLocation
  currentHref <- getHref currentLocation
  response <- requestProjectDeletion $ unpack $ extractLastPiece currentHref
  redirect currentLocation currentHref response
  where
    redirect currentLocation oldHref resp = setHref (pack $ getTarget oldHref resp) currentLocation
    extractLastPiece href = last $ splitOn' (pack "/") href
    getTarget fallback resp = maybe (unpack fallback) target $ (BS.pack `fmap` contents resp) >>= decode

I’ve also added a few imports (Data.JSString, JavaScript.Web.XMLHttpRequest, JavaScript.Web.Location, GHC.Generics, Data.Aeson and Data.ByteString.Lazy.Char8 as BS) and a language extension – DeriveGeneric, for automatic generation of Aeson serialization/deserialization routines. Of course, this required changes in the cabal file as well (aeson and bytestring dependencies). deleteProject becomes our new clickCallback, and everything works the same way as before!

Now, that’s quite a lot of code, so let’s go through it and examine what happens there.
We start with the definition of our response data type – to be honest, it’s not really necessary, and we could’ve just extracted the target field from the response JSON, without intermediate Haskell objects. If GHCJS offers some helpers to do that (with Aeson alone it wouldn’t be much simpler), it could spare us a few packs and unpacks.
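For comparison, here’s a sketch of that direct extraction with plain Aeson (extractTarget is a hypothetical helper, not code from the repository):

{-# LANGUAGE OverloadedStrings #-}

import Data.Aeson (decode)
import Data.Aeson.Types (parseMaybe, (.:))
import qualified Data.ByteString.Lazy.Char8 as BS

-- pull "target" straight out of the response body, no intermediate data type
extractTarget :: BS.ByteString -> Maybe String
extractTarget body = decode body >>= parseMaybe (.: "target")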
Next we create an HTTP request – the only variable part here is the URI, determined by projectId. An important thing to note is that this code works slightly differently than the previous version (in Julius) – we have one script which dynamically determines routes, while previously a separate script was generated and sent with each page, which could add to delays (if it got bigger). Files generated by GHCJS are fairly big (hundreds of kBs), so we can’t really afford sending dozens of them to each user – network bandwidth might be cheaper than it used to be, but it won’t be cheap enough to simply throw the throughput away.
Fun fact: the first time I got this to compile, I had mistyped the route. The lack of static typing for routes is quite sad, but probably solvable with some work.
And then, after a short AJAX wrapper, we have the main event listener – deleteProject. It starts with determining the current path, for two reasons – first, that’s the location to set if something goes wrong ("no change"), and second – to determine the ID of the project. While it works now, it poses several threats. First of all, if two teams work separately on frontend and backend, at some point the route will change (probably without notice) and this mechanism will break. This can of course be prevented with thorough testing and strict processes, but there is also a second problem – no URL minifiers will work. While this might not be a problem now, it may become one when you switch to MongoDB identifiers.
The next line might be one of the most interesting features of Haskell in asynchronous applications. Due to lazy I/O, we can request the redirection to be performed "when the data is ready" (after the response is received – the response is required to proceed here). That’s a really nice solution compared to chains of promises (which are themselves really nice compared to typical Node.js callbacks) – it doesn’t break the code flow but, at the same time, performs implicit waits wherever needed.

The time has come for some final thoughts – after all, we managed to implement the same (or at least very similar) simple functionality using GHCJS and Julius. From my perspective, using GHCJS for simple scripts is a vast overkill – JavaScript is sufficient, and if you want more type safety, choose TypeScript (it is also supported in the Shakespearean family). You get out-of-the-box integration with Yesod, a simple type system and route interpolation. That might not be much, but remember that right now we’re aiming for rather simple scripts. And hey – people write *big* apps in JavaScript with no types, so a few lines are not a tragedy (if required).
As for GHCJS – it’s a powerful and promising tool, but still very immature. Its targets are ambitious, but for now it simply isn’t usable (at least on Windows) – it’s simply unacceptable for an installation from packages to take over two days and require digging through dozens of GitHub issues. Installation from sources might be more convenient, but I expect a mature tool to provide an installable package (even if all the package does is orchestrate the entire compilation locally). And more importantly – if it provides any package, it should work out-of-the-box (regardless of whether it’s old or new – assuming it’s the newest one, since older ones may have bugs). Right now the start-up overhead is simply too big to be acceptable, at least for me (over half of this post is just about setting up GHCJS!). Programming in it is quite nice, but documentation is ultra-sparse, and most of the stuff has to be looked up in the source – that’s also not what I’d expect from a mature tool. Nevertheless, GHCJS caught my attention and I’ll definitely take a closer look at it again in several months. Maybe then it’ll be possible to apply it to some bigger project (for small ones the infrastructure costs – setups, installations etc. – are much too high for me).

Looks like I’ll have to look for a different tool for frontend development (assuming I’m not happy with interpolated JavaScript/TypeScript/CoffeeScript, which is true). I’m going to consider Elm as the next tool – while it’s not exactly Haskell, at first glance it looks quite haskelly, has static types and several other nice features, as well as decent performance and some Yesod integration. Perhaps it’s worth checking in one of the next posts?

Stay tuned!

Constructing the pipeline

September 22, 2016 DevOps, Gitlab, Haskell, Ubuntu No comments

If you successfully added a Gitlab project and pushed our Yesod code there, you might notice that some builds are being executed on your runner. That’s because the project already contains a .gitlab-ci.yml file. As you can see, it’s pretty much empty – just some prints for the sake of checking whether the runner is configured properly.
Since it is, now is the time to adjust our pipeline to a more complex scenario. Obviously, there are dozens of pipelines used for many cases. Here I want to present one quite simple deployment routine. We won’t be using all the steps yet (since we only have unit tests now), but they will come later during development (if not on this blog, then during your own coding sessions).

I propose a four-step pipeline, in .gitlab-ci.yml coded as:

stages:
  - dev-testing
  - packaging
  - integration-testing
  - publishing

dev-testing is the part executed by developers on their local machines – this usually boils down to some linter, compilation and unit test execution. I treat "unit tests" as tests which do not require any particular binary available on the build server (for example a separate database instance or some set of services). For this reason, tests that use only sqlite are fine with me in this phase. Of course, feel free to disagree, I’m not going to argue about it. This phase goes first (before the "official" build phase), because – for compiled languages like Haskell – a separate build is required for test cases, and it acts as a commit sanity check. This stage should only fail if the developer didn’t run the proper scripts before committing (ideally never), or if he’s not required to (e.g. for a really small project).

The next stage, packaging, is a phase that should never fail. It consists of building a whole deployment package – resolving dependencies, constructing an RPM, DEB, Docker image or whatever deployment format you use – and pushing it to a test-package repository (not necessarily – it may sometimes be passed as an artifact between builds).

The third stage, integration-testing, is arguably the most important piece of the whole pipeline. It is needed to verify whether all the pieces fit together. These tests require a full environment set up, including databases, servers, security rules, routing rules etc. I’m a big fan of performing this phase automatically, but many real-world projects require manual attention. If you have such a project, the best advice I can give you is: run whatever is reasonable here, and publish internally if it passes. Then hand the passing scenarios to your testers and add another layer of testing-publishing (possibly using a tool dedicated to release management). This stage will fail often – mostly due to bugs in either your code or your scripts (which are also your code) – there will be races, data overrides and environment misalignments. Be prepared. Still, that’s the purpose of this stage – things that fail here would most probably fail on production otherwise, so it’s still good!

The last stage, publishing, is simple and should never fail – it should simply connect to the release repository and put the new package there. It might be an input point for your Ops people to take it and deploy, or an input point for the testers. This stage should be executed only for your release branches (not ones hidden in developer repositories) and is the end of the automated road – the next step has to be initiated by a human being, be it deployment to production or further testing. This job should also put a proper version tag on the repository (this may be done in packaging as well, but I prefer to have fewer versions).

Of course, all stages may additionally fail for a number of reasons – invalid server configuration, network outage, out-of-memory errors, misconfiguration etc. I didn’t mention them earlier, because they aren’t really related to (most of) the code you create and will occur pretty much at random. However, remember my warning: while they might seem random, you should investigate them the first time you encounter any of them. Later on they will only become more and more annoying, and in the end you’ll either spend your most-important-time-just-before-release-oh-my solving them or start ignoring the testing stage (which is bad).

A few more words about the choice of tooling: I tend to agree that Gitlab CI might not be the best Continuous Deployment platform ever, especially due to limited release management capabilities and a tight coupling to automated-everything (I like it, but most projects require some manual testing). Perhaps Jenkins or Electric Flow would be a better choice, but it would require significantly more attention – first, installing and configuring a separate service, and second – managing the integration. Configuring Gitlab CI only takes a few lines of YAML, but for Jenkins it’s not that easy anymore!

Now, after we’ve managed to design the pipeline, let’s create some example jobs for it.

dev-testing is easy – it should simply run stack setup && stack test (we have no linters for now), as shown below.
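As a job definition that could be as little as (a sketch – the job name is my choice):

dev-testing:
  stage: dev-testing
  script:
    - stack setup
    - stack test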
preparing-package is a little trickier:

preparing-package:
  stage: packaging
  script:
    - stack setup
    - stack install --local-bin-path build
  artifacts:
    paths:
      - build/
    expire_in: 1 hr
  cache:
    paths:
      - .stack-work/

first, we need to install the package to the build directory (otherwise it would remain in a hash-based location or be installed to the local system – which is not what we want), then define the artifacts (the whole build directory) and their expiration (1 hour – should be enough for us). The cache option is useful to speed up compilations – the workspace is not fully cleared between builds. Note that this might be dangerous if your tools don’t deal well with such "leftovers". However, a clean installation of GHC and all packages takes about a year, so caching is required (of course, you may also set up your own package server with a cache for the used packages, if your company is a tad bigger).
The rest of the stages are just prints for now – we have no integration tests, and installing an Apt repository or a Hackage server seems like a bit of an overkill right now. I also hate polluting the public space (public Hackage) with dozens of packages, so I won’t do anything there right now (I might reconsider later on, of course!).

If you download the code from GitHub, you will see that it doesn’t work in Gitlab. Apparently, stack is not installed in our Runner container! Installing it requires quite a few commands, but luckily, they are listed in the Stack installation manual on GitHub.

For Ubuntu Server 16.04 this goes as follows:

# add repository key
sudo apt-key adv --keyserver hkp://keyserver.ubuntu.com:80 --recv-keys 575159689BEFB442
# add repository
echo 'deb http://download.fpcomplete.com/ubuntu xenial main'|sudo tee /etc/apt/sources.list.d/fpco.list
# update index and install stack
sudo apt-get update && sudo apt-get install stack -y

Manual configuration management and tool installation is not the best practice ever, but it’s often good enough, as long as the project is relatively small (or you have dedicated people for managing your servers). We might consider changing this to some configuration management tool later on, when dependencies get more complex.

Aaand, that’s it! First pipeline builds should already successfully leave the Gitlab area. Congratulations!

Next post, promised a long time ago – GHCJS instead of jQuery in our app – comes soon.

Stay tuned!

Preparing the deployment system

September 17, 2016 Containers, DevOps, Gitlab, Linux, LXC, Tools, Ubuntu, Virtualization 4 comments

To deploy Yesod applications we’ll need quite a lot of infrastructure – first, a machine that performs the build, plus its configuration and tooling, and second – a machine that acts as a production server. That’s the minimum – we may also need special machines for test runners, client emulation, performance tests etc. – but let’s ignore that for now and focus on the two machines: build server and production server.
To make our life simpler, we’ll be using virtualization instead of physical servers – it’s cheaper and easier to maintain, plus rollbacks are much easier (a machine snapshot before a risky change is sufficient). I use Windows 10 on my host, so I’ll be using VirtualBox as the virtualization tool, but you may as well use KVM or Xen, or even VMWare if you happen to have a license (the free version doesn’t provide the snapshot feature).
At first I simply set up a virtual machine with Ubuntu Server 16.04. I chose Ubuntu because it’s the only distribution I know of which has LXC/LXD (Linux Containers – kinda like Docker, but more flexible) provided in its package repositories. Since we’ll be using Linux Containers a lot, being sure that they work correctly is a must.
The Ubuntu installer is really nice, so it’ll lead you step-by-step through the installation (I’m not sure if Ubuntu has kickstart/preseed installation accessible as simply as in RedHat/CentOS, but we’re only going to do it once, so we can live with it taking a bit longer and requiring our attention – at least for now). Remember to install the OpenSSH server and Virtual Machine host (KVM). We won’t need DNS for now – we’ll just stick with mDNS (broadcast domain names for internal networks, implemented by Bonjour or Avahi).
You might wonder why we do this manually instead of automated installation via Packer and management via Vagrant – the short answer is: it’s simpler. It’s generally simpler to perform a task manually than to automate it, since automation has to handle the error cases, while a human handles them on the fly. Plus, you need to know the interface, which (arguably) – in tools such as most OS installers – is created for humans (unless you’re deploying RHEL/CentOS – they have a great kickstart installation). I also like to learn how things work before automating them – it really simplifies the automation development later on. Unfortunately, this also means that you won’t be able to simply download a script from the repository and run it (of course, CI scripts will be provided, but deployment ones – not yet).
Anyway, after installing the OS, the next step is to set up LXC containers. To do that, we’ll simply follow the instructions from the Linux Containers webpage to install LXC and configure unprivileged containers (it’s best done for a user without admin rights – otherwise it doesn’t make much sense, except as an exercise). Default container images are downloaded from "somewhere on the Internet", but this can be changed by setting the MIRROR variable (details are available here). I used the original mirror (ubuntu, xenial, amd64 – watch out, you might not be able to install Gitlab on an i386/i686 release), and recommend it to start with. Remember to set up the LXD configuration (via lxd init). After creating the container you won’t be able to log in via SSH after lxc start, since the SSH server is not installed by default and there are no users – you have to use lxc-attach to create the first user, and you’re ready to go!
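As a hedged example, creating the template container could look like this (the image triple matches the one above; the container name is my choice):

# create an unprivileged container from the ubuntu/xenial/amd64 image
lxc-create -t download -n template -- -d ubuntu -r xenial -a amd64
lxc-start -n template
# no SSH server or users inside yet, so attach directly to bootstrap the first user
lxc-attach -n template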
Today we’re going to manually set up two containers – one with Gitlab Server, and second with Gitlab Runner (CI), together with the infrastructure needed by our application. Do not be afraid – all these operations can be automated using Puppet, Chef, Ansible or Salt, but today we’ll perform them manually – to get to know the problem better. Later on, when our infrastructure gets bigger (including Nexus repository and multiple runners) we’ll start provisioning the machines using one of these tools.
For now – just clone this container twice (lxc-copy --name [old-name] --newname [new-name]) – after that we’ll have three containers on our VM: one for the Gitlab Server, one for the Gitlab Runner, and one as a template for future containers (remember to install openssh!).
Installing the Gitlab Server is really simple – just follow the instructions available on the Gitlab webpage. Be careful – if you installed a 32-bit container, you might not be able to use the repository maintained by the Gitlab team, and the one maintained by Ubuntu didn’t work for me. On an x86_64 container it was a piece of cake, so I won’t dive into details, as now we’ve got a problem to solve – how to expose the Gitlab Server, installed in a container (automatically served on the container’s port 80), to our host system (the VM host, not the container host)? Unfortunately, it’s less obvious than I’d expect.
First of all, you need network access from the VM host to the VM guest – in the case of VirtualBox I use a Host-Only network. It needs to be configured on startup – you can request that by adding the following to /etc/network/interfaces:

auto [interface-name-eg-enp0s8]
iface [interface-name-eg-enp0s8] inet dhcp

Of course, you can also use static IP assignment if you prefer.
Then you need to install the iptables-persistent package – it makes it possible to persist our new forwarding rules. Next, you need to set IP addresses for your containers. To do this, first open /etc/default/lxc-net and uncomment the line LXC_DHCP_CONFILE=/etc/lxc/dnsmasq.conf.
If you wish, you can also modify the internal network addresses – I prefer the 192.168.x.y network, so I changed them. Next, open /etc/lxc/dnsmasq.conf and add the line dhcp-host=[container-name],[ip-address]. This will cause LXC’s internal DHCP server to assign static IPs to these containers. So far so good – now it’s time for port forwarding. Luckily, the magical command is available on the LXC page: iptables -t nat -A PREROUTING -p tcp -i [external-connection-name-eg-enp0s3] --dport [host-port] -j DNAT --to-destination [container-ip]:[container-port]. After that you might have to open the ports (we want to be able to access the Gitlab Server from outside the virtual machine – it depends on whether a firewall is enabled). Persist the new rules (sudo netfilter-persistent save), and voila – the Gitlab Server should be accessible!
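To make this concrete, with assumed names and addresses (interface enp0s3, container gitlab-server at 192.168.1.10 – adjust to your setup) the whole sequence could look like:

# /etc/lxc/dnsmasq.conf – pin the container to a static address
dhcp-host=gitlab-server,192.168.1.10

# forward host port 8080 to the container's port 80
sudo iptables -t nat -A PREROUTING -p tcp -i enp0s3 --dport 8080 -j DNAT --to-destination 192.168.1.10:80
# persist the rules across reboots
sudo netfilter-persistent save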

One more thing left – we don’t want to start the container by hand after each VM restart, so we need to start it automatically. To do this, we need to add a flag to our container config:

lxc.start.auto = 1

You can also add lxc.group = onboot (it should just start up earlier), but it didn’t work for me. Possibly because I’m using unprivileged containers, which aren’t cleanly run on startup. Root containers use a nice approach of automatically running lxc-autostart on boot – but this is executed with root privileges. For unprivileged containers we have to additionally help ourselves with nasty tricks like the one proposed in a ServerFault answer – adding @reboot lxc-autostart to the crontab.
Well, as long as it works it’s fine enough – at least until we get some nicer solution (like an additional service, which we don’t want to write now, so we’ll use cron).
Now let’s log in to Gitlab (root is the default user name, you choose the password) and create a project. Then go to the Admin Area (a tool in the top right corner of the screen), then the Overview tab and the Runners subtab. As you can see, we currently have no runners (quite understandable – we didn’t add any). Now is the time to add one.
Remember our second container? Log in to it now. Then run the bad command (curl -L https://packages.gitlab.com/install/repositories/runner/gitlab-ci-multi-runner/script.deb.sh | sudo bash – it’s bad because it gives an unknown script root access to your machine. The good news is that we’re running it in an unprivileged container, so if something bad happens it should be simple to fix). We can avoid the script by manually extracting the repository address from it and adding it to our apt repositories.
The next steps are similar to the ones done for the server – sudo apt install gitlab-ci-multi-runner, adding the container to autostart. One more step, special to the runner, is registering it with the Gitlab Server. This can be done by running sudo gitlab-ci-multi-runner register. To perform the registration you need the network address of the server (either hostname or IP, with /ci appended, e.g. http://10.0.0.1/ci) and the registration token (available on the Gitlab page we opened before installation). Additionally you have to choose a name for the runner (doesn’t matter), tags (don’t really matter for now – they may be used to indicate machine power, architecture or something) and the runner type – that one is quite important – I’ve chosen a shell runner (which is pretty unsafe, but again – that’s why it’s in a container).
If you reload the Gitlab webpage, you should see that a runner was registered.

Congratulations! You’ve managed to set up a Continuous Integration/Deployment environment and a Version Control Repository today! All in nice, separated virtual machines. How cool is that?

In the next post we’re going to provision our environment with the necessary tools – GHC, Stack – and run the first automated, on-push compilations for our repository.

Stay tuned!

Testing the app

September 8, 2016 Haskell, Languages, Testing, Yesod No comments

Every application of reasonable complexity obviously has to be tested before releasing to the customer – Yesod applications are no different. Of course, type safety gives us strong guarantees – guarantees which are far beyond the reach of Django, Rails or Node developers. Still, that’s not enough to purge all the bugs (or at least most of them). That’s why we have to write tests.

There are hundreds of levels of testing that are specified by dozens of documents, standards and specifications – we won’t be discussing most of them here, as there are only a few people that care about technicalities, and most of us care only about “what to do to make the app work and guarantee that it will keep working”. Formally, these are tests of the functionality (there are also other groups, like performance or UX), and are the most common group of tests. For our application, I’d divide it into three or four kinds of tests that require different setups and different testing methods (we won’t be implementing all of them here – that’s just a conceptual discussion):

  • Unit tests
  • Server (backend) tests
  • Interface (frontend) tests
  • System tests

They become needed with increasing complexity, so you won’t necessarily need all of them in your first project, but with time, they become increasingly useful (so do performance and UX tests, but they are a different story). Unit tests are one of the most popular tools, and pretty much everybody claims to use them – they’re the basic tool for assessing application sanity in unityped languages (like Python or JavaScript). Of course, Haskell also has several frameworks for writing them, among the most popular being Tasty and Hspec. Tasty is a framework, but the actual tests and assertions are provided by other libraries, like QuickCheck, SmallCheck or HUnit. The third one is quite a typical xUnit library for Haskell, but the first two are a bit different – instead of testing specific input/output combinations, they test properties of your code. They do this by injecting pseudo-random values and analyzing features of the result (instead of the result value). For example, if we specify that prepend is a function which – applied to a list – increases its length by 1 and makes the given element the first element of the new structure, we could define it as:


import Test.Tasty (testGroup)
import qualified Test.Tasty.QuickCheck as QC

prepend :: a -> [a] -> [a]
prepend elem list = elem : list

x = testGroup "prepend features"
  [
    QC.testProperty "after prepend list is bigger" $
      \list elem -> (length $ prepend elem (list :: [Int])) == length list + 1,
    QC.testProperty "after prepend becomes first element" $
      \list elem -> (head $ prepend elem (list :: [Int])) == elem
  ]

of course, there are lots of different properties that can be assessed on most structures. However, there is a catch in these tests – you cannot ignore intermittent failures. Since they use random input (pseudo-random, but the seed is usually really random), every test failure may be an actual bug. Plus, it’s possible that some bugs will go unnoticed for a few runs. That’s a somewhat different philosophy from the usual approach, where data is always the same and a bug is either detected or not on each run (assuming no "intermittent" environment failures). It’s not better or worse, it’s simply different. Arguably better for lower-level tests, such as unit tests, and that’s why it’s used there. These tests are not suitable for edge-case testing, but are very good at exploring the domain.

There is nothing special in unit tests for applications that are using Yesod – they are just plain Haskell UTs, ignoring the whole Yesod thing.

The next group of tests are server tests – ones that use the backend. They should be run against a fairly complete app, with the backend database set up (preferably on the same host), but without connections to external services or a full deployment (proxies, load balancers etc.). They should mostly test reactions to API calls – in most cases you will not want to test the HTML page but rather some JSON or XML responses (testing HTML is much harder). Yesod provides such tests (called integration tests there, though this name is used in many contexts) in a helper library, yesod-test. Examples of such tests together with the Hspec framework are included in the default scaffolding in the test/Handler directory. As you can see, these are quite like HTTP requests, except that they don’t really go through any port – the communication happens inside a single binary. I really recommend writing this kind of tests – they give a (nearly) end-to-end view of the processing, and are still quite efficient (a single binary with occasional DB access). One more thing about the database: beware. While you’ll probably want to use it in these tests, you have to be sure which applications communicate directly with the database. I don’t mean "the same instance of database as your test database", as I’m sure that you’ll have a special database for tests, preferably set up from scratch on each test run. What I mean is that it’s quite common for more than one application to communicate with the same database – for example, for market analysis. That’s an important point, because if you use a shared database, the database is also one of your interfaces and you should treat it with the same care as your other interfaces.
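As a minimal yesod-test sketch (withApp comes from the scaffolding’s TestImport; someProjectId stands for an id inserted by the test setup, so treat the details as assumptions):

spec :: Spec
spec = withApp $
    describe "project deletion" $
        it "responds with a JSON target" $ do
            request $ do
                setMethod "DELETE"
                setUrl $ ProjectR someProjectId  -- someProjectId: hypothetical, created in test setup
                addRequestHeader ("Accept", "application/json")
            statusIs 200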

There are two types of interface (UI) tests – the first is testing the view – as in Jest, a library for testing React views – and the second is testing the functionality – as in Selenium. I’m focusing on tools for web projects here, because Yesod is a web framework, but the same types are important in pretty much any area, including mobile and desktop applications. Of all tools, these types of tests are probably responsible for the most hatred development teams feel towards testing. That is because both these types are brittle, and small, seemingly unconnected changes can break them. These changes include moving a button a few pixels left or right, changing the tone of the background, removing one or two nested divs. Of course, properly written tests will yield more reasonable results (if you’re using Selenium, check out the Page Object pattern), but still, they’re much less change-tolerant than the other types. Additionally, they are not yet fully mature – despite the fact that Selenium has been around for years already, the driver for Firefox is still not ready (I know that it was broken by Firefox 48 and that geckodriver is not the responsibility of the Selenium team – still, the lack of a driver for one of the top browsers signals immaturity of the tooling as a whole), so you may encounter quite a few glitches. Nevertheless, I really recommend implementing tests for some basic functionalities of the app. In the beginning it might seem that manually clicking through the app is faster, but the amount of manual clicking never ceases to increase, while our patience does cease – and the quality of testing suffers. Of course, I’m not asking you to cover every single detail in UI tests – but at least check the basic features – that checkboxes work, that submit buttons cause submits and that data is available in the UI after its submission. Oh, and there are Selenium bindings for Haskell. For web tests your setup should be similar to the one created for server tests, while for view tests it may be simpler – potentially as simple as your unit test setup.

The last type of tests is arguably the most complex one, and most IT projects don’t have them. They are run in the actual production environment (except that it’s not serving real clients yet) and/or its clone. Their purpose is to guarantee that deployment was done properly, communication with external services is fine and generally the application is ready to start serving real clients. This is no longer the time for checking functionality – that should have been done earlier – only a few basic scenarios are executed, mostly to guarantee that interfaces between system components (services, the external world and things like the OS) are working fine. Static typing helps here as well – perhaps Yesod is not the best example, but Servant is kinda famous for generating type-safe web APIs. Still, we have to check that services were built in compatible versions, ports are not blocked etc. Altogether – this step is more of a job for a DevOps guy and simplifies operations rather than development, but hey – in your startup you’ll have to write everything on your own and deal with the administration as well, so you’d better get to know it!

By the way, that’s precisely what we’re going to deal with in the next post – setting up an automated deployment routine to provide us with a fully automated continuous deployment pipeline. The whole task probably won’t fit in a single post, but hey – let’s see.

Stay tuned!

JSON API in Yesod

September 6, 2016 Web frameworks, Yesod No comments

We’ve managed to implement a full set of CRUD operations on our Project type. Still, our app is definitely not perfect – for example, in the previous post we implemented the DELETE handler as an Html Handler. That’s not exactly fine, since browsers generally use only GET and POST – other methods (such as DELETE) are used either by web services or by AJAX calls. We used it as an AJAX call, and that’s why we couldn’t use a typical redirection – instead we redirected the user via a hardcoded link. This led to races, and projects which were just deleted were sometimes still visible. In this post we’ll deal with this problem.

We’re going to remove the hardcoded link and instead redirect the user to the link returned from the API call, after the results are returned. To do this, we need to make several changes:

  1. add a Handler for JSON DELETE requests
  2. remove redirection from button
  3. add redirection to AJAX handler
  4. change request to accept application/json header

The first one is critical for us, so let’s start with it. We’ll provide only the DELETE method – this way we’ll have something to implement when discussing high-level testing.
Right now we want to provide only JSON responses. Since Yesod uses the Aeson library under the hood for JSON handling, its types will also appear in the handler (Aeson uses the Value type to denote JSON). The new implementation of deleteProjectR looks like this:

deleteProjectR :: ProjectId -> Handler TypedContent
deleteProjectR projectId = selectRep $ do
    provideRep $ do
      _ <- runDB $ delete projectId
      renderUrl <- getUrlRender
      returnJson $ object ["target" .= (renderUrl ProjectsR)]

There are quite a few new elements in here! The first one is the signature – Handler TypedContent. TypedContent means that Yesod will check headers and verify whether it can provide a representation required by the client (by default it’s not checked). For example, if the client requests application/xml, but we can only provide application/json, a 406 (Not Acceptable) error will be returned. This content type check is done by the selectRep function, based on the provideRep calls. There is a little trick here – the actual delete is performed inside the provideRep call. That’s not perfect, as it has nothing to do with the representation, and we would prefer to always execute the action if we can provide the return type. For now it’s not a big deal since we have just one representation, but this can lead to problems (code duplication mostly) later on, when we have many representations – see the sketch below. Creating the JSON is done in a bit simplistic way (to make it totally clean we should define a data type, a ToJSON instance etc.), but doing it in a more sophisticated way seems like an overkill.
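One possible restructuring that performs the delete exactly once, before content negotiation, might look like this (a sketch, not the version used in the repository – note the delete would then happen even if no acceptable representation exists):

deleteProjectR :: ProjectId -> Handler TypedContent
deleteProjectR projectId = do
    runDB $ delete projectId  -- always executed, independent of the Accept header
    renderUrl <- getUrlRender
    selectRep $
      provideRep $ returnJson $ object ["target" .= renderUrl ProjectsR]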

Anyway, we’ve done our job on the server, and everything still works – that’s because jQuery AJAX calls accept all possible responses (the Accept: */* header). We want to change that, since we’ll only be able to handle JSON responses – thus our templates/project.julius will change to:

jQuery("#delete-project").on("click", function() {
  jQuery.ajax("@{ProjectR projectId}", {
    method: "DELETE",
    headers: { Accept: "application/json; charset=utf-8" }
  }).done(function(response) {
    window.location.replace(response.target);
  });
});

This will redirect you to the proper page each time after deletion, so there will be no race condition. The only thing left is to change the hyperlink to an actual button:

<button href=@{ProjectsR} .btn .btn-danger #delete-project>Delete

in project.hamlet. And it’s done!

We’ve implemented our first JSON API. It might not be very impressive, but it works well – at least better than the previous solution, which had a built-in race condition.
That’s it for today! Since we’ve already implemented quite a lot of stuff, we would like to be sure that it keeps working after each change. This is what the next post will focus on – testing environment preparation and testing the application. It will be just a short instruction on how to manually prepare a single environment for testing – automatically generating and deploying such environments is a much harder topic and we won’t be focusing on it now. Perhaps later, who knows? Maybe we’ll even have a series on environment automation!

Stay tuned!

Finishing the CRUD

September 5, 2016 Databases, sqlite, Web frameworks, Yesod No comments

Today we’re going to implement the last of CRUD (Create-Read-Update-Delete) operations on our projects. We already know most of the process, so let’s dive into the code!

First, add the DELETE method to the /project/#Int route (it’ll look like this: /project/#Int ProjectR GET DELETE). Then, the second step – the handler. In Handler/Project.hs let’s implement:

deleteProjectR :: Int -> Handler Html
deleteProjectR projectId = do
  _ <- runDB $ deleteBy $ UID projectId
  redirect ProjectsR

and handling is done! Now, how about adding a trigger? In templates/project.hamlet we’ll add two buttons – one for editing the project, and one for deleting it.
That’s also fairly easy, just add:

  <div>
    <a href=@{ProjectEditR projectId} role="button" .btn .btn-primary>Edit
    <a href=@{ProjectsR} role="button" .btn .btn-danger #delete-project-#{projectId}>Delete

to the end of the file. Now, while routing to the edit page works straight away, that’s not true in the case of delete – we need to add a custom JS handler to it. In Yesod this is typically done using the Julius template language – which is simply JavaScript with some variable interpolation. Luckily for us, we don’t have to use pure JavaScript – the Yesod scaffolding embeds jQuery on the page. While we could live without it, I doubt many pages will be implemented without this library, so we’ll use it. Of course, it’s not as rich as React or Angular when it comes to user interfaces, but let’s face it – we just need a simple hook here. And it’s sufficient to write:

jQuery("#delete-project-#{rawJS $ show projectId}").on("click", function() {
  jQuery.ajax("@{ProjectR projectId}", { method: "DELETE" });
})

This is just your plain old JavaScript, with one detail – variable interpolation (#{rawJS $ show projectId}). As you can see, it’s a bit different than in Hamlet – the rawJS call is quite important here. If we didn’t use it, the default interpolation would kick in – and it uses JSON encoding, so it wouldn’t work (in this exact setup – it’s of course possible to make it work with JSON!).

Note the trick we’ve done here – to keep every Handler a Handler Html, we’ve implemented redirection separately from deletion (the deletion is sent on button click, while the redirection happens on the link). Note that this means bad design and code duplication – we did it purely to avoid API calls requiring JSON/XML responses. We’ll deal with them soon, but for now – let’s avoid them. There is one more problem with this code – it might occasionally display elements that were just deleted in the projects list – this is because the requests are not ordered, so the redirection may be handled before the deletion. In an extreme case, the deletion might not be sent properly. We’ll deal with all these problems in the next post, about the API – for now, we can live with it.

For some reason integration with Julius is not as smooth as with Hamlet – several times stack didn’t catch my changes and I had to rebuild manually (by restarting the stack command).

There are two more simple UI changes I want to make: adding a "back" button to the edit page and an "add new" button to the main projects list.
Both changes seem quite straightforward:

<a role="button" href=@{ProjectR projectId} .btn .btn-default>Back to view

to templates/project-edition.hamlet and

<a role="button" href=@{ProjectEditR newProjectId} .btn .btn-default>Add new project

to templates/projects.hamlet. But there’s a little problem here – we need to generate newProjectId. And it should be as unique as possible. Before we approach this problem, let’s understand why we actually face it, and how to avoid such problems in the future.

The UID field of Project is an artificial field that doesn’t actually correspond to any domain entity. It serves only our internal purposes and – in this sense – is a clone of the ProjectId provided automatically by Yesod. We don’t have such trouble with ProjectId – not because we don’t use it, but because, if we did, it would be automatically assigned a unique identifier. So, by trying to simplify our mental model, we actually shot ourselves in the foot. Oh well, good that it’s such a simple application – it would be much worse if the app was a real server. We’ll fix this issue straight away, since it’ll be much easier than generating a unique identifier.

This adjustment requires quite some changes!

  • config/projectModels – removal of identifier field and UID uniqueness constraint
  • config/routes – #Int to #ProjectId in route signatures
  • Handler/Project.hs – signatures, invocation of renderPage (projectId is totally internal now, so shouldn’t be displayed – but is still needed for routes), getBy404 to get404, deleteBy to delete and page selection – this one might be tricky, so here’s my solution:
    selectPage :: ProjectId -> Widget
    selectPage projectId = do
      project <- handlerToWidget $ runDB $ get404 projectId
      renderPage projectId project
    
  • Handler/Projects.hs – signatures and removal of mapping on fetched projects
  • Handler/ProjectEdit.hs – the form (no identifier anymore!), signatures. And upserting – as you can see, upserting doesn’t take a ProjectId as an argument, so – by default – it would simply add each project as a new one. To prevent this, we’ll have to implement two routes with two actions – insert for new projects and update for existing ones (using a solution proposed on StackOverflow)
  • Database/Projects.hs – we’re actually going to remove this file altogether – we can insert projects via web interface, so for now we do not need hardcoded data. We’ll need it again when we get to testing, but it won’t be until next week, so we can wait. Removing this file will also cause us to remove the insertion hack from Application.hs.

Remember about a runtime change – since we removed a field, automatic migration is not possible, so we need to perform it manually. The easiest way is to simply wipe out all the data and insert it again later – for now that’s good enough. It won’t be good enough when we get to testing, but we still have some time for fun before that happens.

These are mostly simple changes, but remember – if you have any trouble implementing them, you can check out the working code from GitHub. We’ll focus on Handler/ProjectEdit.hs, since it changes quite vastly, and some of the changes are not obvious.

First of all, we export four routes now: postProjectEditIdR, getProjectEditIdR, postProjectEditNoIdR, getProjectEditNoIdR. They are just thin wrappers over postProjectEditR, which has a new signature, Maybe ProjectId -> Handler Html (the wrappers are sketched below). The next change is in the widget files – since we have one page and two possible sources (new project/project editing), we need to support this in the routes. To make it easier, we’ll introduce intermediate variables, defined as follows:

backRoute = maybe ProjectsR ProjectR projectId
postRoute = maybe ProjectEditNoIdR ProjectEditIdR projectId
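The four wrappers mentioned above are then one-liners (a sketch – only the shared handler does real work):

getProjectEditIdR, postProjectEditIdR :: ProjectId -> Handler Html
getProjectEditIdR  = postProjectEditR . Just
postProjectEditIdR = postProjectEditR . Just

getProjectEditNoIdR, postProjectEditNoIdR :: Handler Html
getProjectEditNoIdR  = postProjectEditR Nothing
postProjectEditNoIdR = postProjectEditR Nothing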

Database fetch becomes quite tricky as well – the following form works:

postProjectEditR :: Maybe ProjectId -> Handler Html
postProjectEditR projectId = do
  project <- (runDB . get) `mapM` projectId
  renderForm projectId $ join project

`join` is used to flatten the `Maybe`s (we have a `Maybe (Maybe Project)`, since we can have no id – the first Maybe – or the database may contain no project – hence the second).
Remember about modifying the upsert call! Now we either update or insert, which boils down to:

updateCall = maybe insert (\id val -> repsert id val >> return id) projectId

this lambda expression doesn’t look nice, but repsert (replace, or insert if it doesn’t exist) doesn’t return anything by default, and we need the id here.

That’s it for today! We did a lot of good work – cleaned up several hacks, adjusted type signatures, added the possibility to delete entities. The app is pretty much complete when it comes to basic functionality. In the next post we’ll clean up today’s DELETE implementation with the use of AJAX calls and an HTTP API.

Stay tuned!

Get the data in

September 4, 2016 Databases, Web frameworks, Yesod No comments

We managed to successfully generate the form and redirect the user, but that’s still not enough – the data on the server is still static, as POSTs do not cause the database to be updated.

Well, today we’re going to deal with this problem. Since we’re using the same logic for both adding and updating items, we’re gonna use the upsert function. It’s a function that does exactly what’s necessary – if a matching item exists, it updates it; if it doesn’t exist – it creates it. It fails (throws an exception) if uniqueness constraints are broken, but luckily – that won’t be the case, as we have only a single unique constraint (UID). Luckily for us, Persistent offers an upsert function. This function takes two arguments – the first is a PersistEntity (Project), the second is the list of updates to perform if the record already exists. The default behaviour (for an empty list) is to replace the record, which is perfectly fine for us.

The code is simple – in Handler/ProjectEdit.hs change handling of FormSuccess to

FormSuccess project -> do
   (Entity upsertedId upsertedProject) <- runDB $ upsert project []
   redirect $ ProjectR (projectIdentifier upsertedProject)

and that’s it – the fields will be updated and you’ll be redirected to the view of the newly added project. One little trick here is that we use projectIdentifier upsertedProject instead of projectId – this is done on purpose, to handle weird cases properly. As you remember, projectId is taken from the route, while projectIdentifier is set in a hidden field. A user with some development knowledge could change the value of the field, and the submission would go to a different id than the actual UID of the project. I admit that it’s not the best design ever, but I also believe it’s good enough for us.

Long story short – we did it! Modifications are now stored in the database and the user is redirected to the page of the inserted project. Which is kinda poor right now, as the only information displayed is the short name of the project. Let’s modify it and change templates/project.hamlet to:

<div>
  <h1 .page-header> Project ID: #{show projectId}
  <p> Name: #{name}
  <p> Short name: #{shortName}
  <p> Deadline: #{show $ utctDay deadline}

To make it work we also need to add proper definitions to Handler/Project.hs, but it should be a piece of cake for you! If you want some hints, remember that working code for the service we’re writing is available at GitHub.

For the full life cycle of projects we still need one more operation – delete. Also, an easier way of adding new projects and editing existing ones would be kinda nice. We’ll solve these problems with buttons, Julius and jQuery next week. Even though it will totally work and be nice and stuff, Julius isn’t really what we want for writing our frontend logic – it’s still your usual JavaScript, just with some variable interpolation. After we deal with the rest of the machinery for our service (and there is not that much left – sessions, logging, JSON responses, deployment), we’ll refactor the Julius frontend into one based on GHCJS.

Next post – deletes and some cleanup – coming next week.

Stay tuned!

Generate the form

September 3, 2016 Web frameworks, Yesod No comments

Sorry for the delay of this post – last week was quite a hard time and it took much longer than I thought. To make up for it, this week you’ll get two posts – the first today and the second tomorrow.

As you probably remember, last time we managed to set up database connection and fetch projects from it. Still, we didn’t have an option to insert data – all our data was preset from a hardcoded file. Well, good news – today we’re going to remove this obstacle.

Here’s the plan for today: add a new route – /project/[id]/edit – for editing project data, create a form for it, and write a handling routine.

First of all, let’s start with the route. That’s pretty straightforward and you probably know the right syntax:

/project/#Int/edit ProjectEditR GET POST

in config/routes. A new thing here is that we handle POST as well as GET (no DELETE yet, though!), but it doesn’t make a big difference – we just have to provide one more function, postProjectEditR, in our handler. So let’s do it now – create Handler/ProjectEdit.hs and include it in Application.hs and the .cabal file.
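For reference, wiring the new handler in boils down to an import and a module listing (paths follow the standard Yesod scaffolding):

-- Application.hs: the scaffolded site imports every handler module;
-- mkYesodDispatch then picks up getProjectEditR/postProjectEditR by name
import Handler.ProjectEdit

-- and in the .cabal file, under exposed-modules:
--   Handler.ProjectEdit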

Now we’re going to write the handler – that’s the core of our today’s concept, so let’s start with a little introduction.

There are three types of forms in Yesod – applicative, monadic and input forms. Applicative ones are the most common, and the ones we’re going to deal with today. Monadic ones are used for non-standard views, and input ones are not generated, only received (e.g. because of dynamic fields). Applicative forms are generally created by composing preconfigured fields. Let’s take a look at a more concrete example – our Project with four fields: identifier, name, shortName and deadline. While changing the identifier might seem a bit weird (it’s in the URL after all!), we’re not going to care about that – it’s the forms that are interesting today, not the semantics of project addition, right?

We’ll start with creating the form. The type signature will be a little weird – Int -> Maybe Project -> AForm Handler Project. That’s a little unintuitive, but our route allows accessing UIDs that have no project yet, and if the project already exists, we want to prefill the rest of the fields.

We’re gonna start with the easy part – the Hamlet template for the form. We’ve already created quite a bunch of Hamlet templates, so it shouldn’t be a big deal. Here’s the code for templates/project-edition.hamlet (by the way, if you’re wondering how to change the templates directory in Yesod – you can’t, it’s hardcoded):


<div .jumbotron>
  <h2> Project edition
  <form method=post action=@{ProjectEditR projectId} enctype=#{enctype}>
    ^{formWidget}

Pretty simple, eh? We just add a master block, a heading and the form declaration (with encoding and target route – be careful with the indentation, it’s important that formWidget is under form), and that’s all – well, except the formWidget, which we still have to create. We should do it now, but unfortunately it’s not that simple. Yesod uses the runFormPost function to both generate a form and evaluate input data against it. It has quite an impressive type signature, but the important thing is the result – it’s m ((FormResult a, xml), Enctype). FormResult a is the parsed result (or the information that no data was provided, or a list of error messages), while xml is the formWidget we’re looking for. Enctype is not really interesting – it’s the encoding required by the form (UrlEncoded or Multipart). Its argument is a function that transforms Markup (roughly Html) into an MForm (a form).
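Specialised to our case, runFormPost has roughly this shape (the real signature is polymorphic over the monad, the widget type and the message type):

runFormPost :: (Html -> MForm Handler (FormResult a, Widget))
            -> Handler ((FormResult a, Widget), Enctype)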

Before diving into code, let’s recap what we’re going to do:

  1. create a POST/GET handler (one for both methods)
  2. fetch project from database, if it already exists
  3. generate form with prefilled fields (in case of existing project)
  4. after submission, save the form to database

Quite a lot of work for a single post, right? We’d better get started straight away!
Let’s start with the POST handler:

postProjectEditR :: Int -> Handler Html
postProjectEditR projectId = do
  project <- runDB $ getBy $ UID projectId
  renderForm projectId $ (\(Entity _ proj) -> proj) `fmap` project

That’s fairly straightforward and similar to what we’ve done in Handler/Project.hs. Perhaps the only interesting part is the fact that Maybe is a Functor, so we can fmap over it (the function is applied to a Just value, while Nothing is left untouched).
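The same trick in miniature (plain ghci, nothing Yesod-specific):

main :: IO ()
main = do
  -- fmap reaches inside a Just and leaves Nothing alone
  print $ fmap (+1) (Just 2)               -- Just 3
  print $ fmap (+1) (Nothing :: Maybe Int) -- Nothing

Well, since the first part is already done, let’s go to the rendering part!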

renderForm :: Int -> Maybe Project -> Handler Html
renderForm projectId projectEntry = do
  ((result, formWidget), enctype) <- runFormPost (projectForm projectId projectEntry)
  case result of
    FormSuccess project -> redirect $ ProjectR (projectIdentifier project)
    _ -> defaultLayout $ do
      app <- getYesod
      setTitle $ toHtml $ appName app <> ": editing projects"
      $(widgetFile "project-edition")
      $(widgetFile "back-to-projects")

Now, this piece of code is much more interesting – first of all, it uses the actual form, generated by projectForm projectId projectEntry. We haven’t written that function yet, so it’s more of a stub for the future. The next interesting part is the handling of a successfully parsed Project – here we simply redirect to the view page. This means that we don’t actually insert anything into the database yet – we’re gonna deal with inserting later on. And the rest is displaying the form (we created the Hamlet template at the beginning).

Now, before we get to the actual form, there is one more level of indirection that I’d like to introduce:

projectForm :: Int -> Maybe Project -> Html -> MForm Handler (FormResult Project, Widget)
projectForm projectId project = renderBootstrap3 BootstrapBasicForm $ projectAForm projectId project

I want to introduce it, since it’s quite independent from the form creation, but it’s the point at which the layout is decided (the form’s form, if you will). There are a few interesting things here – first of all, it returns an MForm (the monadic form version), and creates it from the applicative version (AForm). Next, it performs the conversion to Bootstrap mechanisms – namely BootstrapBasicForm. There are three types of Bootstrap form layouts, and this one is the simplest – a label above each input field. Looks good enough for our purposes.
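For reference, the three layouts are the constructors of BootstrapFormLayout in Yesod.Form.Bootstrap3 – sketched here from memory, so check the yesod-form documentation for the exact fields of the horizontal variant:

-- a sketch of yesod-form's layout type, not a verbatim copy
data BootstrapFormLayout
  = BootstrapBasicForm      -- label above the field (what we use)
  | BootstrapInlineForm     -- label and field on a single line
  | BootstrapHorizontalForm -- label next to the field, sized via
      BootstrapGridOptions  --   Bootstrap grid options
      BootstrapGridOptions
      BootstrapGridOptions
      BootstrapGridOptions

With the layout settled, let’s finally get to creating forms!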

Here’s the code:

projectAForm :: Int -> Maybe Project -> AForm Handler Project
projectAForm projectId project = Project
    <$> areq hiddenField identifierConfig (pure projectId)
    <*> areq textField nameConfig (projectName <$> project)
    <*> areq textField shortNameConfig (projectShortName <$> project)
    <*> areq utcDayField timeFieldConfig (projectDeadline <$> project)
    <* bootstrapSubmit ("Modify" :: BootstrapSubmit Text)
    where
      identifierConfig = bfs ("" :: Text)
      nameConfig = withPlaceholder "Project name" $ bfs ("Project name" :: Text)
      shortNameConfig = withPlaceholder "Short project name" $ bfs ("Short project name" :: Text)
      timeFieldConfig = bfs ("Project deadline" :: Text)

Lots of code, eh? But it’s mostly simple, so it shouldn’t be a big deal. First we declare that we’re gonna create a Project, and then list its fields. In our case all the fields are required (areq – optional ones would use aopt instead). Fields are combined using the <*> operator. The first areq argument is the type of field (e.g. textField), the second is the field configuration (in our case labels, but we can also set tooltips or ids) and the third is the default value. We take most default values from the database project, except the identifier – we know it for sure, so it’s taken from the URL (as an argument). The bfs call in all the settings underlines the fact that we use Bootstrap styles, and its argument will be used as the field’s label. There is one little quirk here – currently Bootstrap forms in Yesod don’t handle hidden fields very well, and leave some space for them (namely, space for the label). This might be fixed when a certain GitHub issue is resolved, but it’s quite old already, and it doesn’t work yet. A possible workaround is to add this field to the form after its creation, but it’s not a nice solution, and we’re just gonna live with the current state for now.

We’re almost done – there is just one thing left to display the form. Namely, utcDayField does not exist in Yesod. But fret not! Yesod has a built-in Day field (dayField), which is used for dates. We just need to transform it so that the same field yields a different data type as output. Luckily, Yesod provides a function to transform one field into another, and it has the (quite unintuitive) name checkMMap. Our definition of utcDayField looks like this:

utcDayField = checkMMap (return . Right . toUTCTime :: Day -> Handler (Either Text UTCTime)) utctDay dayField
  where
    toUTCTime x = UTCTime x (secondsToDiffTime 0)

Looks a bit magical, I agree. Basically we need to provide two functions: one to transform a Day into a UTCTime (to get the data into our code), and a second to transform a UTCTime into a Day (to display the default value properly). This is done by toUTCTime (which we define) and utctDay (built-in). The rest of the definition (return . Right . and the type signature) serves mostly to satisfy the type system.
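For orientation, checkMMap specialised to our use has roughly this shape (the real signature is polymorphic over the monad and the message type):

checkMMap :: (Day -> Handler (Either Text UTCTime)) -- convert on the way in, may fail
          -> (UTCTime -> Day)                       -- convert back for display
          -> Field Handler Day
          -> Field Handler UTCTime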

And we’re done! The form is generated and the user can display it easily. A nice bonus is that Yesod-generated forms automatically include tokens that protect us from CSRF attacks.

Remember to also add a handler for GET requests (getProjectEditR = postProjectEditR is a sufficient implementation) and the required imports: Yesod.Form.Bootstrap3 and Data.Time.Clock.

That’s the end of today’s post. We did a lot of things – accessed the database to fetch a (possibly existing) project, generated a form and filled it with default values. We didn’t manage to insert the data into the database yet – we’ll do that in the next post, tomorrow.

Stay tuned!