To Dream of Magick

Dreamer Shaper Seeker Maker

Abstracting the Monad Stack, Part 1

Posted on Thu Sep 21 04:00:00 UTC 2017

As a sidenote, my previous articles were all in terms of a fictitious image processing program. I am actually very interested in image processing, but that is not the code I have on hand, so for my examples, I am switching over to a health-tracking application that I've been working on for a while. I'll probably change the previous articles to reflect it. I am making this change, though, primarily because there are so many real examples for me to draw from.

Here is the application in question, and it is under active development. I have some layer refactoring to do, but the code is stable and I am mostly focusing my efforts on additional features.

Building a library monad

Code for this section

I previously talked about application monads, but today I will talk about a library monad.

Fortunately for everyone, they are almost identical to application monads, but with one twist that I want to lead you to.

In my health application, I have a library that forms the "application", and that library is meant to be wrapped by interface layers, such as the web API or a GUI. The application library is basically a set of functions that make up a complete API that I can execute in the REPL.

So, assume a monad like this one:

data AppContext = App { weightSeries       :: TimeSeries Weight
                      , timeDistanceSeries :: TimeSeries TimeDistance
                      , stepSeries         :: TimeSeries Steps
                      }

-- TODO: handle time series exceptions. Make this less absurd.
data HealthException = TimeSeriesExc SeriesExc
                     | UnknownException String
                     deriving (Eq, Show)

newtype HealthM a = HealthM (ReaderT AppContext (ExceptT HealthException IO) a)
    deriving (Functor, Applicative, Monad, MonadIO, MonadError HealthException, MonadReader AppContext)

runHealthM :: AppContext -> HealthM a -> IO (Either HealthException a)
runHealthM ctx (HealthM act) = runExceptT (runReaderT act ctx)

(yes, that is a real TODO item in the code)

On its own, this isn't bad. The pain is that this is a library, and thus it will likely end up embedded in a different monad stack. As written, I would need to unroll this stack into IO and then re-roll it into my web stack. This is not horrible, but it is annoying. In the health application, I would do the re-rolling to glue these functions into my web application monad, and it would look like this:

data Config = Config
data WebContext = WebContext { config :: Config, app :: AppContext }

newtype WebM a = WebM (ReaderT WebContext (ExceptT WebExc IO) a)
    deriving (Functor, Applicative, Monad, MonadIO, MonadError WebExc, MonadReader WebContext)

handleSaveTimeDistance :: Maybe SampleID -> SetTimeDistanceParameters -> WebM (Sample TimeDistance)
handleSaveTimeDistance sampleId params =
    let workoutFromParams = undefined
        workout = workoutFromParams params
    in do
    WebContext{..} <- ask
    res <- liftIO $ runHealthM app $ saveTimeDistance sampleId workout
    case res of
        Left err -> throwError $ AppExc err
        Right val -> return val

saveTimeDistance :: Maybe SampleID -> TimeDistance -> HealthM (Sample TimeDistance)

Again, this is not awful, but it is tedious. It can also become awful if I want to perform multiple operations from the library interleaved with operations from my webapp. For example, what if I want to query every series that I am storing?

handleGetHistory :: Interval UTCTime -> WebM ([Sample Weight], [Sample TimeDistance], [Sample Steps])
handleGetHistory interval = do
    WebContext{..} <- ask
    weightRes <- liftIO $ runHealthM app $ getWeights interval
    timeDistanceRes <- liftIO $ runHealthM app $ getTimeDistance interval
    stepRes <- liftIO $ runHealthM app $ getSteps interval

    case (weightRes, timeDistanceRes, stepRes) of
        (Left err, _, _) -> throwError $ AppExc err
        (_, Left err, _) -> throwError $ AppExc err
        (_, _, Left err) -> throwError $ AppExc err
        (Right weights, Right timeDistances, Right steps) -> pure (weights, timeDistances, steps)

getWeights :: Interval UTCTime -> HealthM [Sample Weight]
getTimeDistance :: Interval UTCTime -> HealthM [Sample TimeDistance]
getSteps :: Interval UTCTime -> HealthM [Sample Steps]

handleGetHistory already becomes tedious.

Rewrapping the context

Code for this section

The first, most obvious solution is a helper function to re-wrap:

wrapEitherIO :: (exc -> WebExc) -> IO (Either exc a) -> WebM a
wrapEitherIO excTr act =
    liftIO act >>= either (throwError . excTr) pure

handleGetHistory :: Interval UTCTime -> WebM ([Sample Weight], [Sample TimeDistance], [Sample Steps])
handleGetHistory interval = do
    WebContext{..} <- ask
    weights <- wrapEitherIO AppExc $ runHealthM app $ getWeights interval
    timeDistances <- wrapEitherIO AppExc $ runHealthM app $ getTimeDistance interval
    steps <- wrapEitherIO AppExc $ runHealthM app $ getSteps interval
    pure (weights, timeDistances, steps)

And then I can go one step further, with a utility function that runs a HealthM action directly inside WebM:

wrapEitherIO :: (exc -> WebExc) -> IO (Either exc a) -> WebM a
wrapEitherIO excTr act =
    liftIO act >>= either (throwError . excTr) pure

runHealthMInWebM :: (HealthException -> WebExc) -> AppContext -> HealthM a -> WebM a
runHealthMInWebM handler app = wrapEitherIO handler . runHealthM app

handleGetHistory :: Interval UTCTime -> WebM ([Sample Weight], [Sample TimeDistance], [Sample Steps])
handleGetHistory interval = do
    WebContext{..} <- ask
    weights <- runHealthMInWebM AppExc app $ getWeights interval
    timeDistances <- runHealthMInWebM AppExc app $ getTimeDistance interval
    steps <- runHealthMInWebM AppExc app $ getSteps interval
    pure (weights, timeDistances, steps)

This alone makes life much nicer. All of the exception checking boilerplate gets encapsulated into wrapEitherIO, and so every step of handleGetHistory gets to exist on the happy path. In many instances, I could just call this done.

Servant actually provides a typeclass for natural transformations which abstracts this away. It has a challenging type signature, but it is pretty nice and I recommend taking a look at it.
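
Even without Servant, runHealthMInWebM already has the shape of a natural transformation: fix the exception wrapper and the context, and what remains is a function from HealthM into WebM. As a sketch (healthToWeb is just a hypothetical name for it):

healthToWeb :: AppContext -> HealthM a -> WebM a
healthToWeb = runHealthMInWebM AppExc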

Type Constraints

Code for this section

I use type constraints as my preferred method for solving this problem. The idea behind it is that I try to have only one concrete monad stack anywhere in the application.

A "type constraint" is a mechanism by which I declare that a context must implement a particular typeclass, but that the context could be any context that implements that typeclass. A trivial example would be like this:

printSomeStuff :: (Show a, MonadIO m) => a -> m ()
printSomeStuff a = do
    liftIO $ putStrLn $ show a

This function will print out any value, so long as the value implements Show and so long as the function is called in any monad that implements MonadIO. For instance, all three of these calls to printSomeStuff are valid:

run1 :: IO ()
run1 = printSomeStuff "abcd"

run2 :: ExceptT String IO ()
run2 = printSomeStuff "abcd"

run3 :: MonadIO m => m ()
run3 = printSomeStuff "abcd"

Now we build on this concept. To do so, I'm going to rework saveTimeDistance, and by extension all of the get functions, this time starting from the simplest possible implementation.

saveTimeDistance :: Maybe SampleID -> TimeDistance -> AppContext -> IO (Either HealthException (Sample TimeDistance))

handleSaveTimeDistance :: Maybe SampleID -> SetTimeDistanceParameters -> WebM (Sample TimeDistance)
handleSaveTimeDistance sampleId params =
    let workoutFromParams = undefined
        workout = workoutFromParams params
    in do
    WebContext{..} <- ask
    res <- liftIO $ saveTimeDistance sampleId workout app
    case res of
        Left err -> throwError $ AppExc err
        Right val -> pure val

saveTimeDistance can function in any monad that implements MonadIO, so the first thing I will do is to abstract that away:

saveTimeDistance :: (MonadIO m) => Maybe SampleID -> TimeDistance -> AppContext -> m (Either HealthException (Sample TimeDistance))

handleSaveTimeDistance :: Maybe SampleID -> SetTimeDistanceParameters -> WebM (Sample TimeDistance)
handleSaveTimeDistance sampleId params =
    let workoutFromParams = undefined
        workout = workoutFromParams params
    in do
    WebContext{..} <- ask
    res <- saveTimeDistance sampleId workout app
    case res of
        Left err -> throwError $ AppExc err
        Right val -> pure val

This detaches me from a particular monad stack. This function can now be called as-is from any context that implements MonadIO, hence in the code above I no longer need to apply liftIO to saveTimeDistance. For bookkeeping, and because I am going to build upon this abstraction, I will give that type constraint a name:

type HealthM m = MonadIO m

saveTimeDistance :: HealthM m => Maybe SampleID -> TimeDistance -> AppContext -> m (Either HealthException (Sample TimeDistance))

The next step requires a fairly large jump. I want to eliminate that AppContext parameter. It is required for every function in the health application, so it would be nice if I could pass it as part of a MonadReader. The naive solution would be to just do this:

type HealthM m = (MonadIO m, MonadReader AppContext m)

saveTimeDistance :: HealthM m => Maybe SampleID -> TimeDistance -> m (Either HealthException (Sample TimeDistance))

Unfortunately, this actually works against the caller. If the caller has its own context in a MonadReader, that context is not likely to be the same as this one. The result is code that looks like this:

handleSaveTimeDistance :: Maybe SampleID -> SetTimeDistanceParameters -> WebM (Sample TimeDistance)
handleSaveTimeDistance sampleId params =
    let workoutFromParams = undefined
        workout = workoutFromParams params
    in do
    WebContext{..} <- ask
    res <- runReaderT (saveTimeDistance sampleId workout) app
    case res of
        Left err -> throwError $ AppExc err
        Right val -> pure val

I definitely do not want to head back toward needing a run function again, but that is where this leads. The caller has to explicitly pull out the context for this call.

In order to get around this, I have to think a bit differently. I still want an implicit context of AppContext. But, really, the context could be larger so long as AppContext is present in it. So an alternate solution looks like this:

type HealthM m = (MonadIO m, MonadReader WebContext m)

saveTimeDistance :: HealthM m => Maybe SampleID -> TimeDistance -> m (Either HealthException (Sample TimeDistance))
saveTimeDistance = undefined

handleSaveTimeDistance :: Maybe SampleID -> SetTimeDistanceParameters -> WebM (Sample TimeDistance)
handleSaveTimeDistance sampleId params =
    let workoutFromParams = undefined
        workout = workoutFromParams params
    in do
    res <- saveTimeDistance sampleId workout
    case res of
        Left err -> throwError $ AppExc err
        Right val -> pure val

In some ways, this looks better. The caller can now simply treat saveTimeDistance as part of WebM. But now saveTimeDistance becomes aware of WebContext, and so is beholden to a single caller. This is better, but not good enough.

What I want is a way to specify that saveTimeDistance can take any context, so long as that context provides me with a way to extract the AppContext. So, this is a constraint upon a constraint, and it ends up looking like this:

type Health r m = (MonadIO m, MonadReader r m, HasHealthContext r)

Basically, a Health function can run in any MonadReader, so long as the reader's context r "has a health context".

My library gets to declare the HasHealthContext interface. The caller needs to implement that interface for its own context.

type Health r m = (MonadIO m, MonadReader r m, HasHealthContext r)
class HasHealthContext ctx where
    hasAppContext :: ctx -> AppContext

data WebContext = WebContext { config :: Config, app :: AppContext }
instance HasHealthContext WebContext where
    hasAppContext WebContext{..} = app

saveTimeDistance :: Health r m => Maybe SampleID -> TimeDistance -> m (Either HealthException (Sample TimeDistance))
saveTimeDistance _ _ = do
    appCtx <- hasAppContext <$> ask
    ...

With similar improvements made to getWeights, getTimeDistance, and getSteps, handleGetHistory also gets much nicer, and that demonstrates exactly what we wanted to begin with:

handleGetHistory :: Interval UTCTime -> WebM ([Sample Weight], [Sample TimeDistance], [Sample Steps])
handleGetHistory interval = do
    weightRes <- getWeights interval
    timeDistanceRes <- getTimeDistance interval
    stepRes <- getSteps interval

    case (weightRes, timeDistanceRes, stepRes) of
        (Left err, _, _) -> throwError $ AppExc err
        (_, Left err, _) -> throwError $ AppExc err
        (_, _, Left err) -> throwError $ AppExc err
        (Right weights, Right timeDistances, Right steps) -> pure (weights, timeDistances, steps)

getWeights :: Health r m => Interval UTCTime -> m (Either HealthException [Sample Weight])
getWeights = undefined

getTimeDistance :: Health r m => Interval UTCTime -> m (Either HealthException [Sample TimeDistance])
getTimeDistance = undefined

getSteps :: Health r m => Interval UTCTime -> m (Either HealthException [Sample Steps])
getSteps = undefined

Looking forward

We are not quite there yet. We still have some tedium around exception handling. In this system, any thrown SeriesExc must be caught and then re-wrapped in a HealthException in order for the application to typecheck and for the exception to propagate upwards. This sort of tedium likely drove the creation of extensible IO exceptions, which I view as unchecked and undocumented parts of the type signature.
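
To make that tedium concrete, here is a sketch of what the body of getWeights might look like, assuming a hypothetical low-level readSeries that reports failures as Either SeriesExc:

-- readSeries is a made-up stand-in for the time series library's query function
readSeries :: MonadIO m => Interval UTCTime -> TimeSeries w -> m (Either SeriesExc [Sample w])
readSeries = undefined

getWeights :: Health r m => Interval UTCTime -> m (Either HealthException [Sample Weight])
getWeights interval = do
    App{..} <- hasAppContext <$> ask
    res <- readSeries interval weightSeries
    -- every function has to remember to wrap the SeriesExc into a HealthException
    pure (either (Left . TimeSeriesExc) Right res)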

So, the next step will be to abstract the exception-throwing mechanism. Look for that in the coming weeks.

A Nazi Sympathizer in the White House

Posted on Wed Aug 16 16:00:00 UTC 2017

The President of the United States is a Nazi sympathizer and a white supremacist. He is a traitor to the nation.

The Republican party wants to pretend that they didn't know this. But Clinton warned us a year ago. He won anyway. 62 million people wanted a white supremacist rather than a woman.

I am but one white, trans, queer, able-bodied, privileged voice howling in the wind. Far smarter people than me know precisely how we got to this point. Far smarter people than me know that we never really left this point.

So here are real, concrete actions that will push back this tide.

Listen to black people. Listen to queer people. They never stopped telling us how bad things are.

Black Lives Matter.

Haskell Application Monad

Posted on Fri Jul 14 16:00:00 UTC 2017

We want to get productive in Haskell very quickly. Most non-trivial applications have configuration, connections to the outside world, exceptional conditions to handle, and operations that benefit from being logged. If your application has sensible logs at both high and low levels of detail, your devops team will thank you and your life of debugging a production application will be a happier one.

I want to get all of these things at once, and so it would be nice to provide a nearly boilerplate application stack that provides them all. I define the "application stack" as a group of attributes that contain the context and all of the common behaviors for an application. In Haskell, you do that with a monad stack, though work on extensible effects shows a great deal of promise and has been used to great effect in Purescript.

That said, I use monads and monad transformers, and I'll not explain either of them today. I feel that the best explanation is a non-trivial example implementation, which I will do in a future article, or refer you to a better tutorial.

While most of this article explains the process, the final result is this application stack, which may be all you need if you are already familiar with building monad transformer stacks.

data Context = Context { contextRoot :: FilePath } deriving Show

data AppError = AppError deriving Show

newtype AppM a = AppM (LoggingT (ReaderT Context (ExceptT AppError IO)) a)
    deriving ( Functor, Applicative, Monad, MonadIO
             , MonadError AppError, MonadReader Context, MonadLogger)

runAppM :: Context -> AppM a -> IO (Either AppError a)
runAppM ctx (AppM act) = runExceptT (runReaderT (runStderrLoggingT act) ctx)

The most basic stack

Almost every application needs IO. In Haskell it is difficult to do IO on top of anything (see MonadBaseControl for a way), so I always put it at the bottom of the monad stack. A trivial application stack would look like this:

newtype AppM a = AppM (IO a) deriving (Functor, Applicative, Monad, MonadIO)

This is so trivial you will likely never do it, though it can be helpful in that it prevents confusion between your functions and system IO functions. Still, let's build out what you need to make this work.

First of all, you do want AppM to be a monad, and you will need MonadIO in order to actually run IO operations. The primary use that I have for monads in an application is to eliminate the boilerplate involved in threading context through a series of function calls. More to the point, though, you cannot get MonadError, MonadReader, or MonadLogger into this stack without having Monad to begin with.

newtype AppM a = AppM (IO a)
    deriving (Functor, Applicative, Monad, MonadIO)

runAppM :: AppM a -> IO a
runAppM (AppM act) = act

runAppM is the function that connects your application stack to the Haskell IO stack. This is everything you need in order to create a stack: the stack itself and the runner. Now let's see it in action:

data Image = Image deriving Show

loadImage :: FilePath -> AppM Image
loadImage path = do 
    liftIO $ putStrLn $ "loadImage: " <> path
    pure Image
     

main :: IO ()
main = do
    res <- runAppM $ do
        img1 <- loadImage "image.png"
        img2 <- loadImage "image2.png"
        pure (img1, img2)
    print res

Injecting your context

A stack that is just IO is too simple to be of much use. The whole point of having a stack is to unify a lot of effects within a common framework of behavior and with a common context. So, next we load and add a context.

In almost every circumstance, your context is read-only. This points us directly to ReaderT, since you will want to be able to ask for the context but never write back to it. Application state would seem like a thing that you would want to include, if your application stores state. I have generally found that it is easier to keep application state in something that is strictly IO, such as an IORef or a TVar. For now, we shall skip that.
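
As an aside, here is a sketch of that IORef approach (all of the names are hypothetical, and the rest of this article sticks with the plain read-only context below): the mutable cell lives inside the otherwise read-only context, and only IO code touches it.

import Data.IORef (IORef, modifyIORef')

newtype AppState = AppState { imagesLoaded :: Int }

data StatefulContext = StatefulContext
    { statefulRoot :: FilePath
    , stateRef     :: IORef AppState   -- mutable state inside the read-only context
    }

recordImageLoad :: StatefulContext -> IO ()
recordImageLoad ctx =
    modifyIORef' (stateRef ctx) (\st -> st { imagesLoaded = imagesLoaded st + 1 })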

So, change your stack to look like this:

data Context = Context { contextRoot :: FilePath } deriving Show

newtype AppM a = AppM (ReaderT Context IO a)
    deriving (Functor, Applicative, Monad, MonadIO, MonadReader Context)

runAppM :: Context -> AppM a -> IO a
runAppM ctx (AppM act) = runReaderT act ctx

The addition of MonadReader means that now you can call ask within your function to get back the context, and you don't have to explicitly pass the context in. The remaining functions get updated like so:

loadImage :: FilePath -> AppM Image
loadImage path = do
    Context{..} <- ask
    liftIO $ putStrLn $ "loadImage: " <> (contextRoot </> path)
    pure Image

loadContext :: IO Context
loadContext = pure $ Context { contextRoot = "/home/savanni/Pictures/" }

main :: IO ()
main = do
    ctx <- loadContext
    res <- runAppM ctx $ do
        img1 <- loadImage "image.png"
        img2 <- loadImage "image2.png"
        pure (img1, img2)
    print res

Suddenly, everything in Context is available to every function that runs in AppM. You get the local effect of global parameters while still getting to isolate them, potentially calling the same functions with different contexts within the same application.
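
As a quick sketch of that last point (main2 and the scratch path are made up for the example), the same loadImage can run against two different contexts:

main2 :: IO ()
main2 = do
    let picturesCtx = Context { contextRoot = "/home/savanni/Pictures/" }
        scratchCtx  = Context { contextRoot = "/tmp/scratch/" }
    res1 <- runAppM picturesCtx (loadImage "image.png")
    res2 <- runAppM scratchCtx  (loadImage "image.png")
    print (res1, res2)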

Add exception handling and logging

Exceptions happen. The Haskell community is split between what I call explicit vs. implicit exceptions. In short, implicit exceptions are not declared in the type signature, can happen from any function, and can only be caught in IO code. Explicit exceptions are explicitly stated in the type signature and can be caught just about anywhere. I prefer them for all of my application errors. I'll give exception handling further treatment in a future article, and will show the use of explicit exceptions here.

Logging is almost always helpful for any application that is not of trivial size. And, once present, it can replace print for debugging, allowing debugging lines to remain present in the code for those cases when something starts going wrong in production.

First, the new application stack:

data AppError = AppError deriving Show

newtype AppM a = AppM (LoggingT (ReaderT Context (ExceptT AppError IO)) a)
    deriving ( Functor, Applicative, Monad, MonadIO
             , MonadError AppError, MonadReader Context, MonadLogger)

runAppM :: Context -> AppM a -> IO (Either AppError a)
runAppM ctx (AppM act) = runExceptT (runReaderT (runStderrLoggingT act) ctx)

This gets quite a bit more complicated with both the Logging and Exceptions being added. Remember that I use the term "stack" here, and each monad transformer involved represents another layer in the stack. When running the stack, you must peel off each layer in reverse order. I will illustrate with some types:

*Json> :t loadImage "img.png"
loadImage "img.png" :: AppM Image

*Json> :t unAppM $ loadImage "img.png"
unAppM $ loadImage "img.png"
  :: LoggingT (ReaderT Context (ExceptT AppError IO)) Image

*Json> :t runStderrLoggingT $ unAppM $ loadImage "img.png"
runStderrLoggingT $ unAppM $ loadImage "img.png"
  :: ReaderT Context (ExceptT AppError IO) Image

*Json> :t runReaderT (runStderrLoggingT $ unAppM $ loadImage "img.png") ctx
runReaderT (runStderrLoggingT $ unAppM $ loadImage "img.png") ctx
  :: ExceptT AppError IO Image

*Json> :t runExceptT $ runReaderT (runStderrLoggingT $ unAppM $ loadImage "img.png") ctx
runExceptT $ runReaderT (runStderrLoggingT $ unAppM $ loadImage "img.png") ctx
  :: IO (Either AppError Image)

The point of this is that in runAppM, the type of act is the entire stack, and the first thing to be called to begin unwrapping is runStderrLoggingT, then runReaderT, and finally runExceptT.

Notice, also, that the final type of runAppM has changed to IO (Either AppError a). runAppM will now return whatever exception gets thrown from within the context it is running, no matter where that exception is thrown, so long as it is thrown with throwError. Exceptions thrown with throw end up being the implicit exceptions I referred to, and those require some extra handling.
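
As a sketch of that extra handling (safeRunAppM is not part of the stack above, just an illustration), any implicit exception can be folded into AppError at the runAppM boundary:

import Control.Exception (SomeException, handle)

safeRunAppM :: Context -> AppM a -> IO (Either AppError a)
safeRunAppM ctx act = handle onExc (runAppM ctx act)
    where
    onExc :: SomeException -> IO (Either AppError b)
    onExc _ = pure (Left AppError)

Note that catching SomeException also catches asynchronous exceptions, so a real application would probably want to be more selective.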

So, here is the rest of the code. In the places where I used to print output, I am now logging output. Note that the loggers require TemplateHaskell and have slightly odd syntax, but are otherwise nearly identical to print.

data Image = Image deriving Show

loadImage :: FilePath -> AppM Image
loadImage path = do
    Context{..} <- ask
    $(logInfo) (T.pack $ "loadImage: " <> (contextRoot </> path))
    pure Image

loadContext :: IO Context
loadContext = pure $ Context { contextRoot = "/home/savanni/Pictures/" }

main :: IO ()
main = do
    ctx <- loadContext
    do  res <- runAppM ctx $ do
            img1 <- loadImage "image.png"
            img2 <- loadImage "image2.png"
            pure (img1, img2)
        print res

    do  res <- runAppM ctx $ do
            img1 <- loadImage "image.png"
            throwError AppError
            img2 <- loadImage "image2.png"
            pure (img1, img2)
        print res 

This is the output from running main:

*Json> main
[Info] loadImage: /home/savanni/Pictures/image.png @(main:Json /home/savanni/src/haskell/src/Json.hs:76:7)
[Info] loadImage: /home/savanni/Pictures/image2.png @(main:Json /home/savanni/src/haskell/src/Json.hs:76:7)
Right (Image,Image)
[Info] loadImage: /home/savanni/Pictures/image.png @(main:Json /home/savanni/src/haskell/src/Json.hs:76:7)
Left AppError
*Json> 

So, the first block starting with do res <- runAppM runs to completion, returning two images. The second block runs loadImage for the first image, but then hits throwError and returns Left AppError, discarding the first image and not loading the second image at all.


This is nearly the application stack that I have used for more applications than I can count. Even if you need only one feature, such as exceptions, starting with a small stack hidden behind an application monad makes it very easy to add additional features as you need them, without needing to change the rest of your code. This pattern is trivial to extend, or contract, as needed, and so I think it starts every application on a good path.

Reflections

Posted on Mon Jul 10 12:00:00 UTC 2017

I left CollegeVine a little over two weeks ago. Since then, I have gotten myself really busy. For a moment tonight I am going to slow down to reflect on my time since leaving.

First of all, I had an obligatory Friday of just not doing anything. But by Saturday, I got to work. In the last two weeks I have...

  • 86 commits to 4 repositories
  • representing a new library that puts some workflow around JWTs for capability or token based authentication
  • extracting an existing module into its own library
  • converting the authentication scheme of two applications to use the capability tokens
  • built out a Javascript, React, Redux frontend for my health application, comprising the bulk of my time and 1000 lines of code
  • multiple interviews with three different companies and a starter interview with a fourth
  • laziness and playing during the 4th of July
  • I got engaged!!!

Poly Trans Lesbian(ish) Triad

Quite the mouthful!

I moved to Boston to be close to my partners, Cait and Leah. For the last year we have all danced around and away from the topic of marriage.

Finally, though, I decided I was ready, and so on the 3rd of July I asked the two of them to marry me. They both said yes.

Obviously we shall have no legal recognition. I don't even have much expectation of religious recognition. The only thing we will be asking for is that the community around us recognize our relationship, support us, and hold us accountable to the vows that we will make.

We will announce the date when we figure it out, but we may not even begin discussing the date until late this year.

Coding without restriction

This is rather a misnomer. I start putting restrictions on my code before I even start typing the first line. I have certain habits and discipline that come from all my years of coding. Some of those habits lead me to immediately start trying to constrain the scope of what I'm writing.

But, in fact, almost everything I worked on over the last few weeks had pre-existing code that I had to work with.

At the same time, I had incredible velocity, primarily because I understood the systems I worked in. I have a history with those systems, having grown some of them since inception. I also have at least a modicum of automated tests on everything except for my APIs. Most importantly, though, since I understand the system I have no fear of breaking unrelated components. I can make changes and deliberately break things, because I either know the code or I have written automated tests that will help me detect flaws.

Obviously, when I start my next job, I need to use absolutely every tool at my disposal to learn the next system I walk into as quickly as possible. No slow "do some projects and learn by osmosis".

  • map every data structure I can find and understand
  • find every API endpoint
  • read every automated test
  • build automated tests as soon as I find untested corners
  • pair with other people on their code
  • have people pair with me on my tasks
  • document anything that feels undocumented

Learning

When I am stressed, when I feel most pressured to get something done, I can't learn. Anything I learn must be in direct service of immediate needs, preferably to the point that I can simply copy-and-paste a new idea in and tweak it a bit to make it fit. This meant that I did not want to take the time to learn anything because I really needed to get the code working and the feature shipped.

When I relax, and have a project in mind, and a bit of extra time, I can learn a lot and move very quickly in my work. I barely knew any Javascript or React, and I knew no Redux, when I started. Yet I shocked a Javascript expert I know when she saw just how far and how quickly I could move.

This should have been obvious. It wasn't. Now I know.

Haskell

Make no mistake, I love Haskell. But I do not necessarily love the Haskell community. I have met some decent people there, but I have also seen bro-level toxicity. I think that Haskell has a lot of elitism around it, and that attracts the kind of human beings I most dislike. As such, since CollegeVine was probably the least toxic Haskell environment I have encountered, I will not seek another Haskell job until I can create it out of a team of junior programmers who are really excited to learn new things.

Moving forward

I am pretty sure that I am done with interviews with one company. I have additional interviews with two others and think that the fourth will not turn into anything. Some of the positions are really compelling. So, I have an exciting future coming up, and I hope to actually have a new employment contract signed before I have been out of work for a full month.

We shall see. But the future looks bright.

Configuring your Haskell application

Posted on Mon Jun 26 19:30:00 UTC 2017

One way or another, you are going to need to configure your Haskell application, and you have three major ways of doing it. I recommend choosing one and sticking to it. You can combine them, but keep all but one to a minimum in order to stay out of the mind-numbing tedium of consistently combining multiple input parameter sets and their overrides.

Your options tend to be...

  • CLI Option parsing

    I recommend this for small utilities, especially those which you are going to run frequently and with a variety of configurations.

  • Configuration files

    This is generally my preferred way of running an application. You'll still need to do a little bit with option parsing, but only enough to get a configuration. However, it can be a total pain to need to edit a file to change the configuration for a utility, so use this for your longer-running applications.

  • Environment variables

    This is not generally how I want to configure an application, but some environments, such as Heroku, make it the easiest way.

CLI Option Parsing

The most important rule of parsing options from the CLI is...

*Don't write your own CLI parsing library.*

I have made this mistake. It is no longer on the internet. Do not do what I have done. Do this instead.

For particularly simple parameter parsing, you don't need any libraries. For example, I have a tool that I use on occasion to reformat an m3u playlist for my phone. Rhythmbox exports the playlist in m3u format, but with paths that don't work on my Android phone. A tool like this is so simple that the only parameters to it are the input file and the output file.

In fact, the tool is so simple that it may have been better for me to accept the input data on standard in and emit the output data on standard out. Please forgive me for that, too.

import           System.Environment (getArgs)

main :: IO ()
main = do
    (source:dest:_) <- getArgs
    {- do your thing with source and dest! -}
    pure ()

That is the simplest way. However, you may wish to be kind to your users...

main :: IO ()
main = do
    args <- getArgs
    case args of
        (source:dest:_) -> pure () {- do your thing with source and dest! -}
        _ -> putStrLn "Run the application with the source and destination files."

This is your standby for applications with very simple parameters, and these applications are quite common. However, more complex configuration is often needed. For that, resort to Optparse-Applicative. This will give you command line options that are very similar in power to those available in Go.

The tutorial covers basically everything, but here's a starter example:

cliParser :: Parser Config
cliParser = Config <$> option auto (long "interval" <> help "number of seconds between samples" <> value 5)
                   <*> strOption (long "log" <> help "log output file")
                   ...

main = do
    Config{..} <- execParser (info (helper <*> cliParser)
                             (fullDesc <> progDesc "description of the program"))

Look here for a summary of the functions and typeclasses involved above. The entire block around execParser is basically boilerplate code, and all of the interesting bits happen inside cliParser.
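
For reference, here is a complete, self-contained version of that sketch; the two-field Config is made up, just enough to compile and run:

import Data.Semigroup ((<>))
import Options.Applicative

data Config = Config { interval :: Int, logFile :: String } deriving Show

cliParser :: Parser Config
cliParser = Config
    <$> option auto (long "interval" <> help "number of seconds between samples" <> value 5)
    <*> strOption (long "log" <> help "log output file")

main :: IO ()
main = do
    cfg <- execParser (info (helper <*> cliParser)
                            (fullDesc <> progDesc "description of the program"))
    print cfg

Running the result with --help prints the generated usage text, which is a large part of the reason to use the library at all.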

This technique is as common as mud. As an administrator, I do like to pass parameters to my applications, but I dislike services that require excessively long command lines to run. If your application requires more than four or five parameters, or if the parameters rarely change from one run to the next, look to the next section for configuration files, instead.

Configuration Files

For almost all of my configuration needs, I like to go with a file on the disk. I usually put it into a Yaml format, because that allows some complex nested configurations and saves me from needing to write a configuration parser myself.

For my example, I will demonstrate with a program that I use for my HDR processing toolchain. The program has to go through several steps, and basically it needs these parameters:

  • Do I need to align the photographs?
  • What are my input files?
  • What white balance parameters should I use for developing the files?

and so forth. These are the most important parameters. A typical file looks like this:

wb: camera
project: lake-travis-dam
sources:
- _DSC3656.dng
- _DSC3657.dng
- _DSC3658.dng
- _DSC3659.dng
- _DSC3660.dng
align: false
fanout: false

So, first I want a data structure to store this:

-- the extra WhiteBalance constructor (manual temp and green values) is implied
-- by the FromJSON instance below; the Int field types are an assumption
data WhiteBalance = Camera | Auto | WhiteBalance Int Int
    deriving (Show)

data Project = Project {
      sources :: [String]
    , project :: String
    , wb :: WhiteBalance
    , align :: Bool
    , fanout :: Bool
    }
    deriving (Show)


instance Default Project where
    def = Project [] "" Camera False False

(incidentally, I like having defaults for my structures, if I can conceive of a reasonable default)

Whether Yaml or JSON, in Haskell I need a FromJSON instance for parsing this file:

instance FromJSON Project where
    parseJSON (Object obj) =
        Project <$> obj .: "sources"
                <*> obj .: "project"
                <*> obj .: "wb"
                <*> obj .: "align"
                <*> obj .: "fanout"
    parseJSON obj = fail $ show obj

instance FromJSON WhiteBalance where
    parseJSON (String str) =
        case str of
            "camera" -> pure Camera
            "auto" -> pure Auto
            _ -> fail $ "invalid wb string: " ++ T.unpack str
    parseJSON (Object obj) =
        WhiteBalance <$> obj .: "temp"
                     <*> obj .: "green"
    parseJSON obj = fail $ show obj

aside: I use fail instead of mzero or mempty because propagating out any error message at all helps immensely with debugging. I wish I could use throwError, but MonadError is not implemented for Parser.

-- now include code for reading JSON format and Yaml format
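
To fill in that placeholder, here is a minimal sketch using the yaml and aeson packages (the loadProjectYaml and loadProjectJson names are hypothetical):

import qualified Data.Aeson as Aeson
import qualified Data.ByteString.Lazy as LBS
import qualified Data.Yaml as Yaml

loadProjectYaml :: FilePath -> IO Project
loadProjectYaml path = either (fail . show) pure =<< Yaml.decodeFileEither path

loadProjectJson :: FilePath -> IO Project
loadProjectJson path = either fail pure . Aeson.eitherDecode =<< LBS.readFile path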

Environment Variables

While I do not particularly like using environment variables for configuring an application, Heroku and presumably some other services require their use. On the other hand, most languages treat environment variables as a simple dictionary, making them simple to retrieve. Haskell is no exception to this. The only catch is that nested structures require a little more effort to build.

Your workhorse function is System.Environment.getEnv :: String -> IO String. The function will return the value if present, or throw an IO exception if it is not present. Since you may sometimes want to make the variable optional, here is a function that catches isDoesNotExistError and translates it into a Maybe:

import Control.Exception (catch, throw)
import System.Environment (getEnv)
import System.IO.Error (isDoesNotExistError)

maybeGetEnv :: String -> IO (Maybe String)
maybeGetEnv k = (Just <$> getEnv k) `catch` handleIOExc
    where
    handleIOExc exc
        | isDoesNotExistError exc = pure Nothing
        | otherwise = throw exc

Then write your configuration function like so:

import Data.List.Split (splitOn)

loadConfiguration :: IO Project
loadConfiguration = do
    p <- getEnv "PROJECT_NAME"
    s <- splitOn "," <$> getEnv "SOURCES"
    align <- maybe False read <$> maybeGetEnv "ALIGN_IMAGES"
    fanout <- maybe False read <$> maybeGetEnv "FANOUT_EXPOSURES"
    pure $ Project s p Camera align fanout

These are your three major methods for configuring an application. Many applications will permit a certain degree of hybridization between them, but I think it is best to minimize that as much as possible; a command line parameter that specifies the path to a configuration file is about as far as I like to take it. Doing it in the general case, handling command line parameters, defaults, configuration options, and environment variables, has typically led to a very difficult-to-use mess, and I have regretted such attempts.

Whichever method you use for passing configuration in, you'll then want to wrap that configuration up into a context for your application. I will hint more on that in my next article, on the application monad, and give it significantly more detailed treatment later on.


Questions? Comments? Feedback? Email me. I am particularly interested in places that you feel are unclear or which could use better explanation, or experiments you have run that turned out better.

Haskell in Production

Posted on Mon Jun 26 19:00:00 UTC 2017

Synthesizing some hard lessons

Anyone who knows me knows that I love Haskell as a programming language. I have used it professionally in several different projects, and I have used it casually in a huge number of projects. While most of my projects have never seen the light of day, several of them have made it that far at varying degrees of quality.

Production code in Haskell is rare. There are lots of experiments, proofs of concept, and academic research projects out there, but very few complete applications that we get to see. So there are very few instances where we get to see a successful application, much less get an explanation of how it works.

Since Haskell is so strict in its effect management, it can be very difficult and tricky to assemble an application that can do what all of your Python/C/Java/Ruby languages can do instantly. This by itself makes Haskell much less likeable. You get the benefits of strict type checking and effect management, but then it becomes difficult to make different effect systems work together, and it becomes difficult to build a non-painful effects system.

Over the last six months, I have finally come to understand a group of patterns that serve very well in production code. These are patterns that I would reach for instantly any time I start a new application or library. As quickly as possible I want to get to the core of my code, so it would be helpful to have an operational framework in place from the beginning that I will not later regret having.


What will follow is a series of articles discussing patterns to use in setting up new Haskell applications and libraries. At all times I will welcome questions and feedback so I can make the articles better or add supplementary articles to explain sections that I may gloss over in the first pass.

This series is not for advanced users or type theorists. This is for relentlessly pragmatic users who feel that Haskell is a great tool but really need to get things done very quickly without stumbling on every little nook and cranny of the language as they go.

Older Articles

Purescript and Haskell builds in Nix

Posted on Tue Feb 7 13:00:00 UTC 2017

Stack and Purescript do not play together particularly nicely. However, a Haskell developer working on Nix needs to be able to make them play together.

There are a couple of major issues to work around. One of them is a ghc bug which you need to appease regarding library paths.

I don't fully understand the library path bug, but in some way or another, it appears that GHC does not honor a particular parameter which Nix uses to notify GHC of the active library path. However, GHC does honor LD_LIBRARY_PATH. Nix provides a function, pkgs.stdenv.lib.makeLibraryPath, to convert a list of build inputs into a library path specific to that build.

  LD_LIBRARY_PATH = pkgs.stdenv.lib.makeLibraryPath (buildInputs);

(the parentheses around buildInputs may be optional... I'm not really good with Nix syntax yet.)

Additionally, Stack and Cabal (and, I think, Node) depend on the user's home directory, but Nix package builds happen as a user with no home directory, and so the cache directory that all three systems depend on doesn't work. However, it turns out you can solve that simply by setting the HOME environment variable before building.

{ env, pkgs, ghc }:
let
  pname = "purescript";
  pversion = "0.10.3";
  url = "https://github.com/purescript/purescript/archive/v${pversion}.tar.gz";
  sha256 = "46c3f695ccc6e7be3cb2afe1ea9586eafdf51a04f1d40fe7240def0d8693ca68";

in env.mkDerivation rec {
  name = pname;
  version = pversion;

  src = pkgs.fetchurl {
    url = url;
    sha256 = sha256;
    name = pname;
  };

  buildInputs = [ pkgs.stack
                  pkgs.haskell.compiler."${ghc}"
                  pkgs.ncurses5
                  pkgs.zlib
                  pkgs.haskell.packages."${ghc}".alex
                  pkgs.haskell.packages."${ghc}".happy
                ];

  # copied from generic-stack-builder.nix in the nixpkgs repository
  # workaround for https://ghc.haskell.org/trac/ghc/ticket/11042
  LD_LIBRARY_PATH = pkgs.stdenv.lib.makeLibraryPath (buildInputs);

  buildCommand = ''
    export HOME=/tmp/stack

    tar -xf $src
    cd purescript-0.10.3
    mkdir -p $out

    stack build --system-ghc --local-bin-path=$out/bin --copy-bins --allow-different-user
  '';
}

Resistance

Posted on Wed Jan 18 03:30:00 UTC 2017

For the next ten days, my resistance against fascism is to sit with an enby who has just undergone surgery. The kind of person and surgery that self-proclaimed "decent folk" believe is an abomination worthy of death.

If you are one of them, fuck off.

A Quiet Walk, Interrupted

(Image: Ice Flow, 2017-01)

Yesterday, my friend and I were out walking along the Delaware Canal Towpath in Morrisville, Pennsylvania, having a lovely day. Part of the canal is still frozen, and we made a hobby of seeing if we could break the ice using whatever rocks we had to hand. The sound of an ice sheet fracturing is really unique. Not precisely like glass shattering, because the fracture and stresses race down the ice sheet, even if the break only happens in a small area. I lost count of the number of significant rocks that I hurled into the ice that simply got embedded.

We were pretty much minding our own business when an old man walking the other direction started demanding to know where we were going "dressed like that" and telling us "you're not real girls". He proceeded to hurl invectives after us long after we could no longer hear him. He even proclaimed just how great it must be to have wealthy parents who will support my "lifestyle".

When I was twenty, I clawed my way nearly to financial independence to be free of my parents' rules. I've been on my own ever since. I wanted so much to walk up and tell him precisely how much I get paid for my skills at a job that keeps me in air conditioning. The idea of shaming him into silence was almost overpowering. I am a professional in my field, not even 40, nearly at the top, and I make more money than he likely ever has.

How pathetic. Long after we could no longer hear him, we could see him still yelling at us. He had nothing better to do with his sorry excuse for a life.

It is very difficult to walk away silently. I cannot help but feel that I accomplished nothing. That there is no victory to the high road. But that perhaps there is nothing to be accomplished in my reply.

I so rarely face actual transphobia on the streets. So rarely that I vacillate between being shocked when it happens, and shocked that nobody even looks sidelong at me, even when I'm in a small town. For the rest of our walk, we kept our eyes out. We have no idea what the old man may have done. Perhaps he called the cops on us, as has happened so often to people like us. If he did, they ignored him and we continued the rest of our day unmolested.

Decent Folk

So often, the narrative is that we are a threat to "Decent Folk". Somehow, trans queer reality is so powerful that all wholesome goodness breaks down around us.

Decent Folk assault people on the street.

Decent Folk poke their noses into other people's private lives.

Decent Folk remain silent when death camps rise.

Decent Folk vote for a fascist, rapist, traitor because he peddles lies of prosperity.

Decent Folk vote for one who promises more power to those with power.

Decent Folk are so easily duped with fascist lies.

Defiance

What is the queer agenda? STOP HURTING OUR FRIENDS! STOP HURTING OUR CHILDREN! LET US LIVE HEALTHY PRODUCTIVE LIVES IN PEACE!

We are the ones who take in those not our kin. Teenagers thrown out of their homes for being gay, lesbian, transgender, bisexual, asexual, intersex, polyamorous. Strangers moving into a city where they can be safer.

If we could, we would walk away from your fucking "decent" culture. We would separate ourselves and build a civilization of our own. We would interact with you only to rescue the queers who emerge amongst you.

But you won't permit that. You "Decent Folk" have all the power. So fuck you.

16 Years

Posted on Sun Jan 1 05:00:00 UTC 2017

Welcome to 2017!

In a few days I move away from Austin, likely never to return. I grew up in Round Rock. I went away for a few years, and then I returned here to begin my career and my adult life right during the dot-com crash. I'm actually a lot older than I look. Many people upon meeting me seem to assume that I'm 27 or so, when in fact I turned 38 late this year. "Wait, how old are you?"

I keenly feel the passage of time. I feel that I have not begun to approach what I wanted to accomplish by now. But, realistically, I have between 40 and 60 years left. The amount of time that I spent here... three more times. And a lot can change in 16 years.

16 years ago, I thought I was a straight man. I was married. I voted for Bush and thought the Republicans could run the country well. I was Catholic and believed the anti-abortion rhetoric, yet I somehow rejected the anti-gay rhetoric. Go figure. Though we knew a few gay men in college, it was shortly after we moved to Austin that my wife and I noticed for the first time pairs of men openly holding hands at formal "respectable" events. We began to feel a relief that this was the kind of safe city that we never really recognized we sought.

15 years ago, my wife and I decided to have a polyamorous relationship. She said that I had suggested it years earlier while we were dating. I did not remember saying that, but it felt like the kind of thing I might have. It was shortly after this, as I thought about love, romance, and relationships, that I began to believe that it was tragic that I was straight and not bisexual. I can remember being apologetic as I (very occasionally) turned a man down. And it was shortly after this that I understood that I was parting ways with the Catholic church... and I did not particularly regret that.

14 years ago, I found out how infidelity felt. Infidelity in a polyamorous relationship looks different than in a monogamous relationship, but it hurts the same. It cuts through hearts, rips out rugs, and crushes dreams.

I also learned that maybe the Republican party was actually made up of a bunch of chronic liars, and became a Democrat. Later I started to understand how violent and hateful Republicans could be. How did I never see this before? And maybe, just maybe, I shouldn't hold the reproductive health doctrines of men who want to ban abortion but also ban all other forms of contraception and all forms of sex that carry no chance of pregnancy while simultaneously starting a war and lying to me about weapons of mass destruction!

12 years ago I joined a company that became my career for the better part of a decade. They weren't great... in fact sometimes they were downright awful, but over time my authority became vast, as did my knowledge of everything about the business... except what was in the best interest of the business. Ya know, sometimes we techies need to be informed of the big business direction so we can make decisions intelligently.

10 years ago, with the onset of Saturn Returns, I finally figured out that I was not a man. That moment has led me through so many changes and to so many of the people that I find so important in my life now. As a man, I would never have made any of the connections I have as an androgyne. This realization sometimes keeps me awake at night, knowing that it is by the grace of but a few words that I have in my life the love that I experience now. More rationally, a few of my current friends would have been my friends anyway, and they would have noticed my egg tendencies, and they would have aided in my hatching. I may have ended up exactly where I am now, on a different schedule.

Letting go of my own gender also let me release my expectations about my sexual orientation. Reparative therapy, especially religious-based "therapy", is bullshit. We know this. And yet, I successfully "prayed the straight away"!

I also gave up on "til death do we part" and let my marriage end.

Five years ago, I learned photography, and I changed how I see the world. Always watching for that perfect moment. Seeing textures. Analyzing light. Understanding focus and freezing motion. The speckled shadow beneath a canopy. The shimmer of a cobweb five meters up and at least that far away.

Three years ago, I talked myself out of my first suicide attempt. In the aftermath, I evaluated my life. I saw clearly how I was wasting it on my employer's amazingly small dreams, and I chose to spend some time quite alone. I loved living out in the woods. I hated having to drive for twenty minutes to reach the closest decent internet connection, and for an hour to reach any of my friends. But there is a lot to be said for the peace of the forest, for stars so bright as to light the ground, for rain on the metal roof a mere meter from my lofted bed... and for really cheap rent paid in cash under the table. Oh, and did I mention that my landlady also covered electricity? Pretty epic, especially since the cabin wasn't well insulated and I had to run 2.5kW of heating that winter.

In the last two years, I have truly started to learn how black lives matter, and how little I understood my own racism in the past. I have learned about social justice, and become keenly aware of my failings. I have gained true confidence in my skills, and become comfortable in my body for the first time in my life. I have felt my socialist/anarchist heart begin to blossom as I notice the Democratic party repeatedly snatch defeat from the jaws of victory.

And, shortly after my birthday in 2014, I met the woman who has become the love of my life. She had to exercise both persistence and patience. I was wounded and avoiding romance, sex, dating. She had to convince me that a lesbian, even a trans-friendly lesbian, could be interested in an androgyne who still had and wasn't particularly inclined to get rid of eir penis. But, she exercised that persistence, and she waited patiently, while over the course of months I fell in love and I healed. Now we talk of our sixty year plan.

I will miss Austin. I will miss the people here. I will miss all of my bike routes and the restaurants and the events. I will miss the familiarity. And I feel guilt, leaving all of you to stay and stand against the legislature.

But for this woman, where she goes my heart shall follow.

Nix Development Environments

Posted on Mon Nov 28 16:15:00 UTC 2016 by Savanni D'Gerinel

nix-shell, the command that creates a subshell after evaluating any nix expression, has a lot of uses. I found it very useful in my devops work when I had multiple environments to administer, but had to use different tools for each. The shell provides excellent help in isolating my required tools to the environments involved.

A trick, though, lay in learning how to acquire those tools when the tools were not available in the nixos channel. I figured it out, and so here is the example for one of the environments I was administering. Note that I include both Linux and Darwin builds, because I wanted to offer the nix environment to my replacement at the company.

  • Packer -- 0.10.1
  • Terraform -- 0.7.4
  • Ansible 2
  • Python 2.7

We were deploying in Amazon AWS. I used Packer to build the custom images that we were deploying. Autoscaling works a lot better if it has a complete image that only has to be started (the Crops, i.e., the systems that can be replaced almost instantly and thus do get replaced regularly). I love Terraform because I was able to describe everything I was doing in AWS using the tool. Ansible is present for those systems that get reconfigured regularly (primarily the Cattle machines, things that can be rebuilt from just the devops scripts, but that I do not want to terminate). Python 2.7 is present to support Ansible, though it is sometimes convenient to have at hand.

Neither Packer nor Terraform were available in my Nix channel, so I had to build derivations for those. The process is non-obvious until it is done. Here are my scripts for them. At the time I wrote these scripts, I was running NixOS 16.03, however I still use the same scripts after having upgraded to NixOS 16.09.

nix-deps/packer.nix

{ pkgs ? import <nixpkgs> {},
  stdenv ? pkgs.stdenv }:

let
  # suggestion from @clever of #nixos
  package =
         if stdenv.system == "x86_64-linux" then "packer_0.10.1_linux_amd64.zip"
    else if stdenv.system == "x86_64-darwin" then "packer_0.10.1_darwin_amd64.zip"
    else abort "unsupported platform";
  checksum =
         if stdenv.system == "x86_64-linux" then "7d51fc5db19d02bbf32278a8116830fae33a3f9bd4440a58d23ad7c863e92e28"
    else if stdenv.system == "x86_64-darwin" then "fac621bf1fb43f0cbbe52481c8dfda2948895ad52e022e46f00bc75c07a4f181"
    else abort "unsupported platform";
in
stdenv.mkDerivation rec {
  name = "packer-${version}";
  version = "0.10.1";

  buildCommand = ''
  mkdir -p $out/bin
  unzip $src
  mv packer $out/bin/packer
  echo Installed packer to $out/bin/packer
  '';

  src = pkgs.fetchurl {
    url = "https://releases.hashicorp.com/packer/0.10.1/${package}";
    sha256 = checksum;
    name = package;
  };

  buildInputs = [ pkgs.unzip ];
}

nix-deps/terraform.nix

{ pkgs ? import <nixpkgs> {},
  stdenv ? pkgs.stdenv }:

let
  # suggestion from @clever of #nixos
  package =
         if stdenv.system == "x86_64-linux" then "terraform_0.7.4_linux_amd64.zip"
    else if stdenv.system == "x86_64-darwin" then "terraform_0.7.4_darwin_amd64.zip"
    else abort "unsupported platform";
  checksum =
         if stdenv.system == "x86_64-linux" then "8950ab77430d0ec04dc315f0d2d0433421221357b112d44aa33ed53cbf5838f6"
    else if stdenv.system == "x86_64-darwin" then "21c8ecc161628ecab88f45eba6b5ca1fbf3eb897e8bc951b0fbac4c0ad77fb04"
    else abort "unsupported platform";
in
stdenv.mkDerivation rec {
  name = "terraform-${version}";
  version = "0.7.4";

  buildCommand = ''
  mkdir -p $out/bin
  unzip $src
  mv terraform $out/bin/terraform
  echo Installed terraform to $out/bin/terraform
  '';

  src = pkgs.fetchurl {
    url = "https://releases.hashicorp.com/terraform/0.7.4/${package}";
    sha256 = checksum;
    name = package;
  };

  buildInputs = [ pkgs.unzip ];
}

The structure of each script is relatively straightforward.

  • declare that pkgs and stdenv are both required, as well as how to get them if they are absent
  • based on the OS, declare what package I want to download and the relevant checksum
  • declare the name and version of the derivation
  • create the custom build command

    In many cases, the default build command works perfectly, but only for projects that are built with autoconf or with Stack (and possibly some other build systems). Both Terraform and Packer are distributed as binaries, and so it is necessary for me to specify the build for the derivation.

    In this case, the build is simply to unzip the downloaded package (specified in $src) and copy the executable into the destination (which has a root at $out). It is vital that the executable end up in the bin/ directory. I am not sure of the mandated directory structure of a derivation, but I know that derivations that did not include the bin/ directory would fail. I assume that they failed because there was no executable to add to the path.

  • specify precisely how to get the source package. In this case, through the fetchurl tool.
  • specify additional build inputs. These have to be somewhere in the nix namespace. pkgs.unzip refers to nixpkgs.unzip in the standard channel.

Both of the files above must go in a subdirectory. I named the subdirectory nix-deps/. Some subtle interaction will cause an infinite recursion if the two files are included in the root directory of your project.

With those present, it is time to build the nix-shell command:

shell.nix

let
  pkgs = import <nixpkgs> {};
  stdenv = pkgs.stdenv;
  terraform = import nix-deps/terraform.nix {};
  packer = import nix-deps/packer.nix {};

in stdenv.mkDerivation {
  name = "v2-devops";

  buildInputs = [ pkgs.ansible2
                  terraform
                  packer
                  pkgs.python
                  pkgs.python27Packages.alembic
                  pkgs.python27Packages.boto
                  pkgs.python27Packages.psycopg2
                  pkgs.awscli
                ];
}

The only difficult part here was for me to figure out how to import my Terraform and Packer derivations. I handle that with the import nix-deps/<package>.nix {} lines. The result of each import statement is a derivation, and so it is valid to include in buildInputs.

buildInputs again just lists the packages that must be included in this derivation. So, I included all of the packages that I use directly.

Thus, from the root directory of my devops folder, I can simply run nix-shell and have exactly the version of Terraform, Packer, Ansible, and Python that I want. This also means that I can have completely different versions for a different devops repository (I was actually administering three different clouds, all with different standards). And, possibly best of all, if I could convince my co-workers to use Nix (the tool, not the operating system), they would have had a trivial way to set up their development environments, also.