Progress on Illuminance

Savanni D'Gerinel 16 May, 2014

I started the Illuminance project back in February of this year. Immediately after my last contract ended, I started working on the code in order to make this application real.

Three months in, I both exult in everything I have learned, and despair at how much remains to be done. This looked like such a simple problem, yet in its solution I have pursued so much new information that I constantly swim underneath a bare understanding of all that I have learned.

Where am I now?

User Interface

I have a very basic user interface that allows me to load images, see their previews, see error messages that occur during load, and run a “naive” HDR calculation. Error messages get displayed in the location where an image preview would normally be. While this is not necessarily elegant, it allows me to display an error that is specific to one image, since a lot of things happen in parallel and in other threads.

Generally, no expensive operation occurs in the main GUI thread, so the GUI does not ever actually hang. I have not been able to test this exhaustively, and I am sure that I will find places where I am wrong, but this does certainly seem to be the case.

Images load in millisecond time, even very large images. They also load with reasonable space usage. I had some great success with creating my own image codec library and representation, but then I discovered that a combination of JuicyPixels and Repa solve the problem better by virtue of being more complete.

On the HDR Calculation

I have spent a vast amount of time learning math in the hopes that I could figure out how to set up the HDR calculation. Ultimately, I read some Matlab code that somebody else wrote to figure out how to set up all of the terms for the problem.

I put all of that together, set up the matrices to solve, called into the LAPACK sgelsd function to actually run the calculation… and got the wrong answer. I am stumped; however, there is still a lot that I can play with. I have solved problems before where the wrong answer cannot be reverse engineered and where fixing the problem can only be done by carefully examining the inputs. I imagine that is what will be required here.

I do have what I call a naive calculation in the application. This calculation scales every pixel by the associated exposure value and then averages the results. The end result has strange visual artifacts and doesn’t actually give me a high dynamic range anything. While I could improve this, all I wanted was to get something on the screen. I have since focused all of my efforts on getting the real calculation to work.
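For concreteness, the naive calculation can be sketched in a few lines of Python (the application itself is Haskell; the pixel values and exposure times here are invented for illustration):

```python
# Rough sketch of the "naive" merge: divide each pixel by its frame's
# exposure time to estimate radiance, then average the estimates
# across frames. Frames and exposure times below are made up.

def naive_hdr(frames, exposures):
    """frames: list of equal-length pixel lists; exposures: seconds per frame."""
    scaled = [[p / t for p in frame] for frame, t in zip(frames, exposures)]
    n = len(frames)
    return [sum(col) / n for col in zip(*scaled)]

# Three "images" of four pixels each, shot at different exposure times.
frames = [[10, 20, 40, 80], [20, 40, 80, 160], [40, 80, 160, 255]]
exposures = [0.5, 1.0, 2.0]
print(naive_hdr(frames, exposures))
```

Note how the last pixel, clipped at 255 in the longest exposure, drags the average down; that is exactly the kind of artifact a real response-curve recovery is supposed to avoid.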

What needs to be done

Oh so much.

  • Get the HDR calculation actually working
  • Be able to save and load HDR files
  • Tonemapping to LDR
  • Making a nice GUI

sigh. In other words, what is done so far only barely scratches the surface of a complete application.

Diversion

In any event, I am diverting to a different task. Problems on Linux have left all of the current photo managers unusable, and have left me unable to even submit a bug report on the one that I’ve been using for several years, so I am scrambling to hack one out on my own. I have no idea whether this is going to work, because while I can handle most everything, there are GUI elements that I don’t know how to handle yet. We shall see. This does, however, put Illuminance on hold, but all of my photography is really on hold due to these bugs.

  • Shotwell – Ugh. Doesn’t handle XMP files. Won’t update tags on a RAW file of any kind. Instead, squirrels tags away in some custom database in a hidden folder so that no other program can see them. This problem burned me very badly once, so I refuse to use the program again unless full XMP support gets added.
  • Digikam – An 8 million pound gorilla (requiring a full installation of KDE) that I’ve been using with good success for a couple of years. Writes all tags for RAW files into an XMP sidecar, so I haven’t lost a tag in years. Unfortunately, it is also crashing randomly with either the Abort signal (and no reportable stack trace) or a Segmentation Fault (which might be reportable as soon as I get symbols installed). I think the problem lies somewhere in the KDE libs, not in Digikam itself. Either way, the application is unusable and I have no way forward to fixing it.

So, maybe I can produce something a little more lightweight. I’d been thinking of trying anyway, but I had not intended to do so until Illuminance was done. Hopefully I will get results faster, since this is just a GUI and data management problem.

Python Type Annotations

Savanni D'Gerinel 3 Jun, 2013

I have to admit, 40% of the reason I use Haskell lies in static type checking. Another 40% lies in strong, ironclad immutability. Only about 20% of my love of Haskell has anything to do with language expressiveness. Make no mistake, the language has expressiveness out the wazoo, but I truly love static typing and immutability.

One day recently turned into a disaster. One problem led to another in a task that should have been trivial and instead involved four hours of beating my head against the desk. Part of the problem was that my system under test had only acceptance tests, executed only with all of standard out and standard error shunted away to /dev/null. Either way, after I got my changes made, I decided to step out of the office to think.

Python has expressiveness. It has neither static type checking nor immutable values, and the language developers get really holy about this. I have no interest in arguing with them, as I believe they have decided to abandon all safety and flood the world with code that might blow up in production because, in some corner case that a STATIC TYPE CHECKER could have detected at compile time, they inadvertently passed an Integer to a function that expected a String. So, I will not change Python the language, but I do want to make things nicer. Even though my solution will not get checked at compile time, it can certainly make debugging easier when a function crashes immediately due to a blown data type assertion, rather than propagating that incorrect data type potentially quite some distance.
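A tiny, invented example of that propagation: the bad value gets stored without complaint, and the crash happens far from the actual mistake.

```python
# A made-up example of a type error surfacing far from its origin.
# An int sneaks in where a string was expected, and nothing complains
# until a string method is finally called much later.

def make_greeting(name):           # expects a string, but nobody checks
    return {'greeting': name}      # the int is happily stored...

def render(record):
    return record['greeting'].upper()   # ...and only explodes here

record = make_greeting(42)         # the real mistake happens on this line
try:
    render(record)
except AttributeError as e:
    print('crashed far from the bad call:', e)
```

The traceback points at render, while the actual bug sits in whatever called make_greeting, which may be several modules away.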

I have put the code in a repository; you can get this particular file at type_checking.py.

hg clone https://gitlab.com/savannidgerinel/python-tools

The gruesome way

You could do it like this. I have done this.

def authenticate (username, password):
    assert isinstance(username, types.StringType)
    assert isinstance(password, types.StringType)

    ... do a bunch of authentication stuff and talking to databases and many things that belong in the IO monad ...

    assert isinstance(ret, UserClass)
    return ret

Ew. It will work in a pinch… but ew. This gets especially bad if the function has several return points. Yes, it improves the readability of the input parameters, but it does little for the return value beyond making postconditions.

Slightly better

Assertions mark things that “should never happen in code”. So, technically, an AssertionError is not a good thing to throw in the case of a type error. Python actually provides TypeError to indicate that a data type error has occurred. That is convenient. So, instead of calling assert, let’s create a function that will do the job and raise a better exception. And, let’s build in making the parameter optional.

def typecheck (param, type_, optional=False):
    if not optional and param is None:
        raise TypeError('None is not permitted for this value')
    if param is not None and not isinstance(param, type_):
        raise TypeError('Expected type %r, got %r' % (type_, type(param)))

With this, your above code would look like this:

def authenticate (username, password):
    typecheck(username, types.StringType)
    typecheck(password, types.StringType)

    ... do a bunch of authentication stuff and talking to databases and many things that belong in the IO monad ...

    typecheck(ret, UserClass)
    return ret

This doesn’t improve the code much, but it does make for more descriptive error messages. I’m rather liking this improvement. But I can do better.

Decorative rescue

I once read this round of pejoratives about static type users, and I wondered for a while what that meant. I looked things up, found a few references to using decorators to “decorate” type checks on functions, but I did not like the solutions. Maybe they were good solutions, but I wanted to solve it myself. Also, the typecheck module for Python appears to be almost seven years dead.

So I present some code that I wrote in an hour yesterday.

First, I played a bit with the syntax, and then I put the syntax into a unit test. You do test your code, don’t you?

class TestTypeChecker(unittest.TestCase):
    @unittest.skip('disabled')
    def testNoParamsReturnString(self):
        @accepts()
        @returns(types.StringType)
        def f():
            return 'abcd'

        f()
        self.assertRaises(TypeError, lambda: f('a'))

    @unittest.skip('disabled')
    def testParams(self):
        @accepts(types.StringType, types.IntType)
        @returns(types.NoneType)
        def f(var1, var2):
            return None

        f('abcd', 15)
        self.assertRaises(TypeError, lambda: f('abcd', 'efgh'))
        self.assertRaises(TypeError, lambda: f(15, 'efgh'))
        self.assertRaises(TypeError, lambda: f())

In here you can see the syntax. Before each declaration of f(), I put an @accepts block and an @returns block. The desired data types get passed into @accepts and @returns as though these two calls were function calls. As it happens, they are.

Additionally, I wanted to flag a parameter as optional. Not optional in that it can be omitted, but optional in that I could pass None instead of the declared type.

    def testMaybeParams(self):
        @accepts(types.StringType, Maybe(types.IntType))
        @returns(types.NoneType)
        def f(var1, var2):
            return None

        self.assertRaises(TypeError, lambda: f('abcd', 'efgh'))
        f('abcd', None)
        f('abcd', 15)

    def testOptions(self):
        @accepts(types.StringType, Options(types.NoneType, types.StringType, types.IntType))
        @returns(Options(types.NoneType, types.IntType))
        def f(var1, var2):
            if var1 == 'None':
                return None
            else:
                return 15

        f('abcd', 15)
        f('abcd', '15')
        f('abcd', None)
        self.assertRaises(TypeError, lambda: f('abcd', 5.5))
        self.assertRaises(TypeError, lambda: f(None, 'abcd'))

Note Maybe. The declaration will come soon, but I created Maybe as a class that accepts a data type as a single parameter. If either of the decorators sees that the parameter type is Maybe, then it will allow either None or the type passed to Maybe in the corresponding parameter. And then, some time later, I created Options as a way to specify that a parameter can be any of several data types, including None.

So, finally, it is time to present the code itself. First, my two support classes. They are delightfully short.

class Maybe(object):
    def __init__(self, var_type):
        self.var_type = var_type

    def __repr__(self):
        return 'Maybe(%s)' % self.var_type

    def check(self, param):
        if param is None:
            return True
        if isinstance(param, self.var_type):
            return True
        return False

class Options(object):
    def __init__(self, *args):
        self.var_options = args

    def __repr__(self):
        return 'Options(%s)' % ','.join(map(repr, self.var_options))

    def check(self, param):
        for type_ in self.var_options:
            if isinstance(param, type_):
                return True
        return False

Both of these exist to give expressiveness to the type system, as above. In both cases, it became simplest to create a check operation that would actually run the check against a parameter and return whether the parameter passes.

The actual guts of the type checking happens in a series of standalone functions.

def format_param_mismatch(idx, arg_type, expected_type):
    return 'Incorrect type in parameter %d: got %s, expected %s' % (idx, arg_type, expected_type)

First, I have a function, format_param_mismatch, to provide a good error message in the case of a parameter type mismatch. Note that it requires the index of the parameter, the argument type, and the expected argument type. I included the index because I found it necessary to say “Hey, a parameter doesn’t match, and it is this parameter!”

def check_param(param, expected):
    if getattr(expected, 'check', None):
        return expected.check(param)
    return isinstance(param, expected)

This function is pretty simple. It only returns True or False. If the “expected” type has a check method, i.e., it is Maybe, Options, or some other supporting class that I have not created yet, get the result by calling the check method. Otherwise, just run isinstance.

def accepts(*var_types):
    def checked_function(f):
        def checker(*args, **kwargs):
            # if len(var_types) != len(args):
            mismatches = [
                (idx, type(arg), var_type)
                for (idx, var_type, arg) in zip(itertools.count(), var_types, args)
                if not check_param(arg, var_type)]
            if len(mismatches) != 0:
                raise TypeError('\n'.join(map(lambda x: format_param_mismatch(*x), mismatches)))
            return f(*args, **kwargs)
        return checker
    return checked_function

Decorators are complicated to code.

First, the decorator itself takes parameters. That is *var_types, and that is what allows the syntax above. Calling accepts(...) with the types returns checked_function.

Second, checked_function then gets applied to your original function, and the magic plumbing of the decorator replaces your binding with checker, a new function that wraps your original one.

Third, note that all of this happens when the function is defined, not when it is called. At definition time the original function and the types get linked together, and your function binding gets replaced with the function that runs this check.
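The three layers are easier to see in a stripped-down decorator that takes a parameter. tagged here is an invented example, not part of the type-checking code; it just exhibits the same structure that accepts uses.

```python
# A minimal parameterized decorator, showing the same three layers as
# accepts: the outer call captures the decorator's own argument, the
# middle function receives the function being decorated, and the inner
# one is what actually replaces the original binding.

def tagged(label):                      # layer 1: captures the parameter
    def decorate(f):                    # layer 2: receives the function
        def wrapper(*args, **kwargs):   # layer 3: what actually gets called
            return '%s: %s' % (label, f(*args, **kwargs))
        return wrapper
    return decorate

@tagged('result')
def add(a, b):
    return a + b

print(add(2, 3))   # prints "result: 5"
```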

def returns(return_type):
    def checked_function(f):
        def checker(*args, **kwargs):
            val = f(*args, **kwargs)
            # assert isinstance(val, return_type), 'Incorrect return type: returned %s, expected %s' % (type(val), return_type)
            if not check_param(val, return_type):
                raise TypeError('Incorrect return type: returned %s, expected %r' % (type(val), return_type))
            return val
        return checker
    return checked_function

returns works in exactly the same way as accepts, but applies the data type to the return value. With this, no matter how many return statements you have in your code, the actual returned value gets checked. Admittedly, it is getting checked after whatever side effects your function had, so if you return invalid data from your database update, your database has already been updated and potentially corrupted.
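Pulling the pieces together, here is a condensed, self-contained version of the two decorators with a quick demonstration. It is simplified from the code above: no Maybe or Options, and written against Python 3 built-in types (str, int) rather than the Python 2 types module used in this post.

```python
# Condensed, self-contained restatement of accepts/returns, for Python 3.

def accepts(*var_types):
    def decorate(f):
        def checker(*args, **kwargs):
            for idx, (arg, t) in enumerate(zip(args, var_types)):
                if not isinstance(arg, t):
                    raise TypeError('Incorrect type in parameter %d: got %s, expected %s'
                                    % (idx, type(arg), t))
            return f(*args, **kwargs)
        return checker
    return decorate

def returns(return_type):
    def decorate(f):
        def checker(*args, **kwargs):
            val = f(*args, **kwargs)
            if not isinstance(val, return_type):
                raise TypeError('Incorrect return type: returned %s, expected %s'
                                % (type(val), return_type))
            return val
        return checker
    return decorate

@accepts(str, int)
@returns(str)
def repeat(text, count):
    return text * count

print(repeat('ab', 3))        # 'ababab'
# repeat('ab', 'x') raises TypeError before the function body ever runs
```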

Limits

You have some limits still.

First, and I think this part is critical: you still will not know that you have a data type error until you actually exercise a code path that exhibits it. On the other hand, when you do, at least you find out very quickly that your error is not a different kind of logic error.

Also, not quite obviously, I do not have a way for you to check arbitrary argument lists. Any part of *args that does not get captured by a named parameter will not be checked, and none of **kwargs will be checked. The decorator syntax is simply too limited to be able to describe such a check without the entire declaration becoming very cumbersome.

Generally, I would suggest avoiding arbitrary keyword arguments. They are not always a problem, but they do tend to lead to necessary but undocumented parameters. If you must use them, reserve them for naming optional, arbitrary data fields, and give every actually necessary parameter an explicit name in your function declaration.

Overall, however, using these decorators liberally will help significantly with the task of tracking down problems that are ultimately data type errors. Additionally, the presence of the decorator helps document the API for the next person to come along, making explicit things that otherwise a programmer would have to dig into the code to find out.

If you are like me and data type errors are the most common error you make, these decorators are going to be a big help.


Python Type Annotations by Savanni D'Gerinel is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. You can link to it, copy it, redistribute it, and modify it, but don't sell it or the modifications and don't take my name from it.

Haskell's Foreign Function Interface

Savanni D'Gerinel 5 Mar, 2013

As part of a research project, I did a tiny bit of work to figure out the tiniest part of Haskell’s Foreign Function Interface today. This is just the tip of the iceberg, but I thought I would show off the extremely simple demo that I worked out. I present both the C and Haskell versions of a program that feeds “abc” and “def” into the standard Unix crypt() function.

Source material comes from The FFI Cookbook and from Foreign.C.String

C, Venerable C

I did not do things in this order. However, for demonstration, we should start with the C operation. First, crypt() has the prototype `char *crypt(const char *key, const char *salt);`. The code to call it takes a mere eleven lines:

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char **argv) {
    char *res;
    res = crypt("abc", "def");
    printf("%s\n", res);

    return EXIT_SUCCESS;
}

Compile this with gcc -o crypt crypt.c -lcrypt. When I compile I get the warning crypt.c:7:9: warning: assignment makes pointer from integer without a cast [enabled by default]. As best I can tell, unistd.h only declares crypt() when a feature-test macro such as _XOPEN_SOURCE is defined, so with no declaration in scope the compiler assumes the function returns int. Either way, the code works correctly and as expected.

An important note: crypt() apparently returns a pointer to an internal data structure that you MUST NOT deallocate. I read comments in the man page about a GNU version of crypt(), crypt_r(), that is re-entrant, which reinforces my belief. If you want to keep the result of crypt(), be sure to copy it away, but otherwise do not deallocate it. Running the program under valgrind indicates no leaked memory, which really solidifies this understanding for me.

Haskell FFI

The Haskell FFI was pretty easy to use, ultimately, and the cookbook really shows what we need to make this work. It takes a little bit more code, though, because we have to do the marshalling to and from C data types:

{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Foreign.C
import Foreign.Ptr
import Foreign.Marshal.Alloc

foreign import ccall "crypt" c_crypt :: CString -> CString -> CString
crypt key str = do
    keyC <- newCString key
    strC <- newCString str
    res <- peekCAString $ c_crypt keyC strC
    free keyC
    free strC
    return res

main = do
    crypt "abc" "def" >>= putStrLn

foreign import ccall is some serious magic, but it pretty much amounts to the same as dlopen(). The catch is that you absolutely must declare the data type. So, here is the structure:

foreign import ccall "<optional header file> <function name>" <haskell binding> :: <type>

According to the FFI cookbook, prepending the name of the function with c_ is simply a convention.

What I do not know is whether c_crypt should have been declared as above, or as c_crypt :: CString -> CString -> IO CString. I am not yet clear on when a C call should be within the IO context and when it should not. I suspect that the compiler will not enforce anything and that it is up to me to make a (potentially unsafe) judgement call in the matter. Given that crypt() keeps internal state, so that two calls with the same arguments need not return the same buffer contents, the IO version is probably the safer declaration.

What’s the Point?

I want to do Haskell on the new Ubuntu software stack for tablets. This uses QML (though the rest of Ubuntu seems to still use GTK). I have found a QML library for Haskell, but it is out of date and does not work with the current Qt 5. So, I can either recode it from scratch, or I can fully understand it and update it for the modern Qt. In both cases, I need to learn the FFI. But I’ve needed to learn the FFI for a long time, anyway.


Haskell FFI by Savanni D'Gerinel is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. You can link to it, copy it, redistribute it, and modify it, but don't sell it or the modifications and don't take my name from it.

Processing SqlValues in Haskell

Savanni D'Gerinel 1 Jun, 2012

Let us assume that, whether you are just learning Haskell for the first time or have long since mastered Haskell the language, you suddenly need to hook up to a database for the first time.

Don’t laugh. Many applications work extremely well without a database.

My applications, however, almost always have a relational database somewhere in their guts. I work with a lot of relational data. I have to learn Database.HDBC early, and many of you will, too.

You should know that I have actually practiced Haskell several times over the last six years. I probably have about one year of active development scattered across that time. This time I was finally able to comprehend some concepts that really kicked my ass in previous attempts. In that time, libraries have changed and Database.HDBC has emerged as the primary interface to SQL.

Putting data into the database is really easy. Even getting data back out is easy, up to a point. You can find basic instructions on connecting for the first time, putting data into the database, and running queries, over in Chapter 21 of Real World Haskell. My article does not cover that. Instead, it covers what to do with the data that HDBC hands back to you.

I have not written much about it, but I am building up a general Fitness Application that covers workouts of the form “x repetitions, y sets”. This application is currently distinct from my Cycling application, but I may merge them some time in the future.

The application

I am slowly working my way through the 100 Pushup Challenge. I have not purchased the Android app or anything and have been keeping logs of my progress in a normal text-based journal. But, I want to do some data analysis so that I can see trend lines and whether and how quickly I am making progress.

Several different workout types all follow this data structure. Whether I am doing the 100 Pushup Challenge or the 200 Situp Challenge or any of the other challenges, all of the workouts share identical data structures. I call this a SetRepWorkout (a workout consisting of multiple sets of repetitions with rest time in between) and modelled it as such:

data WorkoutType = Pushups | Situps deriving (Show, Read, Eq)
data SetRep = CSet { reps :: Integer, rest :: Integer, next :: SetRep }
            | FSet { reps :: Integer }
data SetRepWorkout = SetRepWorkout { uuid :: Data.UUID.UUID,
                                     date :: Data.Time.Calendar.Day,
                                     workout_type :: WorkoutType,
                                     description :: String,
                                     sets :: SetRep }

In theory, I could have represented things like so:

data SetRep = CSet { reps :: Integer, rest :: Maybe Integer }
            | FSet { reps :: Integer }
data SetRepWorkout = SetRepWorkout { uuid :: Data.UUID.UUID,
                                     date :: Data.Time.Calendar.Day,
                                     workout_type :: WorkoutType,
                                     description :: String,
                                     sets :: [SetRep] }

In fact, you will see later that I use this representation when storing the data. Arguably, this could have worked out better, but I used the original recursive definition. Even more, I probably could have left out the rest time as not informative.

Now, given the representation, we have to work on the ever-annoying functions to move the data into and out of the database.

When I started, I investigated the Persistent database architecture. I will likely investigate it again later, but I discovered that Persistent does not handle recursive data structures. When I told Persistent to save a SetRep structure, it saved the first CSet just fine, but everything in the next field ended up serialized into the next column in the database. I find this representation unacceptable as it forces all data queries to load the data into the objects, making ad-hoc queries impossible.

Changing the representation to non-recursive would have fixed the problem, but then I would not have created nearly so much fodder to explain. And, ultimately, when doing complex data modelling we will all find a case in which the ORM breaks down and we must use SQL instead.

For the final application, though, I do not rule out the possibility of changing the representation. I may even drop the rest time by dropping the SetRep structure completely and making sets a list of Integers. If I make this change, Persistent becomes more viable, but still has some problems. I will discuss those down in my Conclusions.

The really easy way

Most documentation online covers running queries and retrieving data, but it rarely focuses on retrieval beyond the basic fetchRow or fetchAllRows functions. Once you have called these functions (or fetchAllRowsAL or fetchAllRowsMap), you have SqlValues, not values relevant to your domain model. Consider this:

conn <- connectSqlite3 "example.sqlite"
stmt <- prepare conn "SELECT * FROM Workout WHERE uuid=?"
execute stmt [toSql "9d4f98cd-63eb-4fad-8d9d-070f191b72db"]
rows <- fetchAllRowsAL' stmt

This looks distressingly procedural, but I could have rearranged it into a >>= pipe to make it feel less so. When we examine rows, in this case we see:

[[("uuid",SqlByteString "9d4f98cd-63eb-4fad-8d9d-070f191b72db"),
  ("day",SqlByteString "2012-04-18"),
  ("workout_type",SqlByteString "Pushups"),
  ("description",SqlByteString "Week 3, Day 3, Column 2")]]

Now, remember that I called fetchAllRowsAL'. This will fetch all of the rows (hence the outermost list that has only a single element), and return each row as an association list of (column name, value). Further, this particular function fetches all data strictly, so I don’t have to worry about laziness issues when I finalize the stmt.

So, extract just the interesting workout with

let row = head rows

Then we need to convert from [(String, SqlValue)] to Workout:

let uuid = fromJust $ fromString $ fromSql $ fromJust $ lookup "uuid" row
    day = fromJust $ parseTime defaultTimeLocale "%Y-%m-%d" $ fromSql $ fromJust $ lookup "day" row
    workout_type = workouttypeFromSql $ fromJust $ lookup "workout_type" row
    description = fromSql $ fromJust $ lookup "description" row
SetRepWorkout uuid day workout_type description (FSet 0)

This works, but it catches no errors. fromJust will throw an exception if it gets Nothing as an input. fromString :: String -> Maybe UUID returns Nothing if it receives a string in the wrong format. So, there are two classes of errors:

  • column not present
  • invalid data

I feel comfortable with allowing a column not present error to crash the application. The application expects a particular structure for the database and does not get it, so I consider it a programming error. But data in the database can get corrupted, and I do not want the program to crash because of an invalid data format. I would flag the error and do something with it in a higher level of the application.

A more robust (and almost as easy) way

It turns out that a more robust implementation is almost as easy to implement, but figuring that out took me seven hours of experimenting.

Notice that HDBC contains safeFromSql :: Convertible SqlValue a => SqlValue -> ConvertResult a. This leads naturally to the definition of Convertible and to ConvertResult.

HDBC does not include an instance for Convertible SqlValue UUID, but such an instance would help. If you had one, you could say this:

let uuid = safeFromSql $ fromJust $ lookup "uuid" row

(Note: I left a fromJust in place. lookup will return Nothing if the “uuid” column does not exist, and I consider this a programming error. So, I actually want fromJust there to blow up the program in that event.)

In fact, with proper instances, you could replace much of the above code like this:

let uuid = safeFromSql $ fromJust $ lookup "uuid" row
    day = safeFromSql $ fromJust $ lookup "day" row
    workout_type = safeFromSql $ fromJust $ lookup "workout_type" row
    description = safeFromSql $ fromJust $ lookup "description" row

Now, each of the fields will contain Left (ConvertError ...) instead of exploding when the code encounters invalid data for that field. You cannot feed that directly into a SetRepWorkout, and unravelling each value by hand would be tedious, but ConvertResult b is just a type alias for Either ConvertError b, and so it functions in the Either monad. So the entire function above can look like this:

rowToWorkout :: [(String, SqlValue)] -> ConvertResult SetRepWorkout
rowToWorkout row = do
    uuid <- safeFromSql $ fromJust $ lookup "uuid" row
    day <- safeFromSql $ fromJust $ lookup "day" row
    workout_type <- safeFromSql $ fromJust $ lookup "workout_type" row
    description <- safeFromSql $ fromJust $ lookup "description" row
    return $ SetRepWorkout uuid day workout_type description (FSet 0)

To make this work, you must ensure that each of the above conversions has a Convertible a b instance.

As it turns out, HDBC already provides Convertible SqlValue Day and Convertible SqlValue String. I simply had to provide Convertible SqlValue UUID and Convertible SqlValue WorkoutType. Here I provide those implementations, complete with the compiler flags necessary to even make them possible. I put the compile flags at the top of the file, but you could put them on the command line or in your Cabal file.

{-# LANGUAGE TypeSynonymInstances, FlexibleInstances, MultiParamTypeClasses #-}

import Data.ByteString.UTF8 as BUTF8 (toString)
import Data.UUID
import Data.Convertible.Base


instance Convertible SqlValue UUID where
    safeConvert (SqlString a) = case fromString a of
        Just b -> Right b
        Nothing -> Left $ ConvertError (show a) "SqlValue" "UUID" "Could not parse UUID"
    safeConvert (SqlByteString a) = safeConvert $ SqlString $ BUTF8.toString a
    safeConvert a = Left $ ConvertError (show a) "SqlValue" "UUID" "No conversion available"

instance Convertible SqlValue WorkoutType where
    safeConvert (SqlString a) = case a of
        "Pushups" -> Right Pushups
        "Situps" -> Right Situps
        _ -> Left $ ConvertError (show a) "SqlValue" "WorkoutType" "Unrecognized value"
    safeConvert (SqlByteString a) = safeConvert $ SqlString $ BUTF8.toString a
    safeConvert a = Left $ ConvertError (show a) "SqlValue" "WorkoutType" "No conversion available"

Given these implementations, and the rowToWorkout function, the selectWorkoutByID function is not bad, though it could probably use some more refinement.

selectWorkoutByID :: IConnection a => UUID -> a -> IO (Either String SetRepWorkout)
selectWorkoutByID w_id conn = do
    stmt <- prepare conn "SELECT * FROM Workout WHERE uuid=?"
    _ <- execute stmt [toSql $ show w_id]
    workouts <- map rowToWorkout <$> fetchAllRowsAL' stmt
    return $ case workouts of
        []         -> Left "No workout found"
        [Right w]  -> Right w
        [Left err] -> Left $ show err
        _          -> Left "Multiple hits for a single workout ID"

Conclusions

You will need this information. For one reason or another, you will one day encounter a case in which an ORM cannot function, and then you will need to go to SQL. The needs of the database absolutely should not dictate your domain model. Persistent annoys me because, in an effort to stick with “Don’t Repeat Yourself”, it actually forces database knowledge into the domain model, which I feel is a violation of the Onion Architecture. Ultimately, though, this is a simple application and such a violation may not be so bad.

I used this case, however, for practice and for teaching. The program is so simple that I can really explore the implications of the Onion Architecture and force in a level of architectural purity. I would like you to closely examine the code and consider for yourself whether such a setup makes sense in your application. Perhaps, once I learn Template Haskell, I will even develop a set of templates to describe this code and let the compiler figure it out for me.

More concerning to me, though, is that I am not sure how Persistent can be made to follow even First Normal Form when dealing with a composite data structure without modelling the data in some very odd ways. Now, I would accept this if I were going to a key-value store or a document database, but I’m going to a relational database and I want the option of running queries based on proper relations. In almost every relational database I build, I demand at least Second Normal Form, frequently also Third Normal Form, and possibly also some insights from even higher normal forms.

Model <-> Database code is annoying, tedious, and error-prone. I recently read an article by Patrick Loi, Dependency Injection Ceremony, and really got hooked on the image of the Devil on one shoulder, the Angel on the other shoulder, and the Angel saying “You should be listening to that other guy”. Read the article, as it is hilarious and informative. It is also relevant here.

Martin Fowler has some objections to ORM Hate, and he is right in that rolling your own ORM almost never makes sense. On the other hand, the ORM always (no, really, ALWAYS) violates a good Onion Architecture because it forces you to define database representation in the middle of your domain model. Ew. Gross.

Your solution is going to vary. For now, due to the extreme pain of a project at work in which I definitely did not properly isolate the domain model from the database, I will practice quite a lot with the purest architecture I can get. After I have experienced the benefits and drawbacks of this purity, I will revisit the compromises I want to make on my projects.

I have the full set of code for this fitness project (which will slowly grow into an application) on Gitlab. If you have comments or feedback, please email them directly to me at savanni@alyra.org.


Processing SqlValues in Haskell by Savanni D'Gerinel is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. You can link to it, copy it, redistribute it, and modify it, but don't sell it or the modifications and don't take my name from it.

Immutable Lists and Function Pointers in C

30 Apr, 2011

Lists, lists, lists…

I have written a lot of Clojure code recently. With Clojure inheriting from Lisp, and Lisp standing for LISt Processor, I have thought quite a lot about lists recently.

I have also written a huge amount of C code recently. I took a piece of code that I wrote five years ago, noted that I would need to do a lot of work to add the new feature, so I decided to overhaul it and rewrite it The Correct Way. In this case, that means using MySQL’s prepared statement API instead of string hackery. Trust me when I say that string hackery sucks, especially since I know about SQL injection and had to protect against that, too.

I found that I had a lot of isomorphisms in the code, so like the abstraction astronaut that I aspire to become, I started templating the code. No, not the horrific C++ templates, but something more Lispy: creating functions that I could pass around easily. On and on I went, and then I started struggling with parameters to all of these various functions that I needed in order to build up and populate my queries.

“Lists!”, I said!

“No!”, said C.

C said it loudly, so I ultimately used a list data structure that a co-worker wrote for the internal library we use. It weighs a lot and does not allow for certain forms of elegance that I wanted.

So, independently, I played with the concept quite a lot and ultimately wrote my own list library.

Ah. Yes. Another one. And here I’d sworn to never do it again.

List Representations

I have a list of data. I need to store it, pass it around, maybe even modify it. I have some options.

First… a list object. I have implemented several of them over the years, starting all the way back in my data structures classes. Singly-linked lists, doubly-linked lists, circular lists, and on and on. Then later I even did some optimizations and I took up the Standard Template Library, because using it was less painful than implementing Yet Another List.

I want a list object when I plan to work with a lot of data, and I really want one if the list itself will change a lot. But list objects fall outside the scope of this article because they end up so thoroughly customized and optimized. Further, they do not lend themselves well to language-level idioms in C. The rest of this article will explain what I mean by showing what I have tried to accomplish.

Given that, I can enumerate three different common list representations:

  • counted lists
  • terminated lists
  • vararg lists

You know about these. You have a terminated list every time you declare a string:

const char my_str[] = "this list has a zero character terminating it";

However, your list could look more like this:

const char *my_strings[4];
my_strings[0] = "String 0";
my_strings[1] = "String 1";
my_strings[2] = "String 2";
my_strings[3] = NULL;

In this case, the list has a NULL pointer at the end of it. If you know this, you can iterate over the list like so:

ssize_t i;
for (i = 0; my_strings[i] != NULL; i++)
    printf("[%zd] %s\n", i, my_strings[i]);

But, what if you want to embed a NULL in the middle of the list?

const char *name[3];
name[0] = "Savanni";
name[1] = NULL;
name[2] = "D'Gerinel";

At this point, you have to count the list. You know that the list contains three values and that some of them may be NULL. With that in mind, your downstream code will need to understand that it may encounter NULL pointers, but you will also need to explicitly tell it how many values to look at.

You will also encounter this if you deal with binary data. Technically, all data in a computer is binary, but in this case, I mean a string of data in which each cell could contain any value, including 0.

size_t img_size = 1024;
const unsigned char *img = <1024 bytes of an image>

Now for the really big huge catch.

For one thing, I have never seen any functions for list manipulation in glibc, except qsort().

For another, functions written for one list representation will not work for another.

Let’s table this for a short time and explore a use case near and dear to my heart.

Anonymous Lists

In Clojure, nobody bats an eyelash if I do this (assuming print-names has already been defined):

(print-names "Savanni" "D'Gerinel")

In C, I have a few options:

print_name("Savanni", "D'Gerinel");
print_name((char *[]){"Savanni", "D'Gerinel"});

char * lst[] = {"Savanni", "D'Gerinel"};
print_name(lst);

Assuming I got the code right, print_name((char *[])…) and print_name(lst) are the same. This shows an example of declaring a list inline, anonymously. I can use this trick in other cases, too, such as declaring a structure. For instance:

struct timespec rem;
nanosleep(&(const struct timespec){5, 0},
          &rem);

In this case, I anonymously declared the timespec that nanosleep needs. Sometimes declaring a data structure that exists solely to become a parameter to a function is a total pain, so being able to do this anonymously adds elegance to my code. Nobody ever taught me this, nor have I seen it in example code or tutorials anywhere. I just discovered it one day digging through some of the code at my office.

DANGER: From here on out we leave behind the realms of Type Safety. We begin to play firmly in the realm of void *, and static type-checking becomes a dream of the past. You have been warned.

Returning to print_name, though, I have a problem. I did not create a terminated list, nor did I actually tell print_name how many elements the list contains. This creates disasters in C, so I have to do one or the other. Since I want the most generalized code I can possibly create, I have opted for a counted list:

print_name(2, (char *[]){"Savanni", "D'Gerinel"});

The other way I want to create such a list would be like so:

print_name(2, "Savanni", "D'Gerinel");

The point is to be able to call print_name() with as many different varieties of lists as possible. The problem, though, is that the two examples here require separate implementations, one accepting a counted list, and one accepting C’s vararg system.

void print_name_cnt (ssize_t cnt, char **lst);
void print_name_var (ssize_t cnt, ...);

While this would be just fine for one function, I was doing stuff like this for many different functions, and it all became very unwieldy very quickly.
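To make the duplication concrete, here is a sketch of what those two implementations might look like. The bodies are my own illustration; to keep them checkable, they join the names into a caller-supplied buffer instead of printing:

```c
#include <stdarg.h>
#include <stdio.h>
#include <string.h>
#include <sys/types.h>

/* Counted-list version: walks an array, bounded by cnt. */
static void print_name_cnt(char *out, size_t outsz, ssize_t cnt, char **lst) {
    out[0] = '\0';
    for (ssize_t i = 0; i < cnt; i++) {
        strncat(out, lst[i], outsz - strlen(out) - 1);
        if (i + 1 < cnt)
            strncat(out, " ", outsz - strlen(out) - 1);
    }
}

/* Vararg version: same logic, but pulls each name from the va_list. */
static void print_name_var(char *out, size_t outsz, ssize_t cnt, ...) {
    va_list ap;
    va_start(ap, cnt);
    out[0] = '\0';
    for (ssize_t i = 0; i < cnt; i++) {
        strncat(out, va_arg(ap, char *), outsz - strlen(out) - 1);
        if (i + 1 < cnt)
            strncat(out, " ", outsz - strlen(out) - 1);
    }
    va_end(ap);
}
```

The loop bodies are identical; only the mechanism for fetching the next element differs. Multiply that by every list-consuming function in a program and the duplication becomes obvious.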

Enter my solution

The solution lay not in making the functions flexible, but in flexibly constructing the lists. Object-oriented programmers probably knew this all along. I needed a single coherent list implementation and functions for building that list. I also needed a library of list operations, because what is the point of having such a list abstraction without functions to go along with it?

Enter, my Immutable Lists API.

Through some trial and error, I developed the basic implementation:

typedef struct im_list_s {
  size_t cnt;
  const void *data[];  /* C99 flexible array member */
} im_list_t;

In my implementation, I assume that the size of the list Will Not Change. The “im_” in “im_list_s” stands for “immutable”. Further, the entire list gets allocated with a single malloc() call, so I can deallocate it with a single free(), no matter which construction function built it.

Note: I do not provide ironclad guarantees of immutability. The length of the original list cannot change, but you could always hack cnt. You could change the pointers that the list points to. Be sure you understand what you are doing in that case, because I cannot protect you.

Now, the basic construction functions:

im_list_t * im_list_create_vararg (ssize_t cnt, va_list vap);
im_list_t * im_list_create_cnt (ssize_t cnt, ...);
im_list_t * im_list_create_term (void *term, ...);

Given these three functions, I can build a list in three different ways:

im_list_create_cnt(2, "Savanni", "D'Gerinel");
im_list_create_vararg (2, vap); // in this case, vap was built by some other function accepting a vararg
im_list_create_term (NULL, "Savanni", "D'Gerinel", NULL);
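Of the three, the terminated constructor has the most interesting job: it must scan the arguments once just to count them, and then scan them again to copy them, which is exactly what va_copy exists for. Here is a self-contained sketch of how it might work; this is my own illustration of the idea, not the library's actual source:

```c
#include <stdarg.h>
#include <stdlib.h>

typedef struct im_list_s {
    size_t cnt;
    const void *data[];        /* flexible array member */
} im_list_t;

im_list_t * im_list_create_term (void *term, ...) {
    va_list ap, counter;
    va_start(ap, term);
    va_copy(counter, ap);

    /* First pass: count the arguments up to the terminator. */
    size_t cnt = 0;
    while (va_arg(counter, void *) != term)
        cnt++;
    va_end(counter);

    /* One malloc() covers the header and all of the data slots. */
    im_list_t *lst = malloc(sizeof(im_list_t) + cnt * sizeof(const void *));
    if (lst != NULL) {
        lst->cnt = cnt;
        /* Second pass: copy the pointers into the freshly sized block. */
        for (size_t i = 0; i < cnt; i++)
            lst->data[i] = va_arg(ap, const void *);
    }
    va_end(ap);
    return lst;
}
```

The counted and vararg constructors are simpler still, since the count arrives ready-made and only the copying pass remains.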

Given these three, I need only a single implementation for print_name:

void print_name (im_list_t *lst);

print_name(im_list_create_cnt(2, "Savanni", "D'Gerinel"));

There’s Always A Catch

This is all well and good, but what about Memory Management?

These lists that I am creating cannot be made on the stack. I have no choice but to allocate some heap space for them because I never know, until the list gets created at run time, how much space the list will need. As always, I must deallocate any allocated heap space. If I delegate that to the calling function, though, I lose all of the elegance of inline construction. I may as well go with a heavier solution.

im_list_t *lst = im_list_create_cnt(2, "Savanni", "D'Gerinel");
print_name(lst);
free(lst);

No good.

So, in my library, I decided that lists would be heap-allocated but managed through static variables.

im_list_create_cnt keeps a static variable internally, builds the list into that static, and returns it. If that variable already has a value, it will deallocate the old value first. This means that you cannot be totally haphazard in when you create lists – you must be done using one list before you can create another – but it does allow you the flexibility of creating a list anonymously as I wanted to do without leaking memory.

However, sometimes I do want to keep that list, so I created im_list_dup to duplicate any list. The original list remains under the control of im_list_create_cnt, but I now have a new copy that I can safely use as I desire.

im_list_t *lst = im_list_dup(im_list_create_cnt(2, "Savanni", "D'Gerinel"));
print_name(im_list_create_cnt(2, "Jim", "Thorpe"));
print_name(lst);
free(lst);
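Putting the two ideas together, here is a sketch of the static-result pattern and of im_list_dup. The bodies are my own illustration of the scheme described above, not the library's actual source, and the struct is repeated so the sketch stands alone:

```c
#include <stdarg.h>
#include <stdlib.h>
#include <string.h>
#include <sys/types.h>

typedef struct im_list_s {
    size_t cnt;
    const void *data[];          /* flexible array member */
} im_list_t;

im_list_t * im_list_create_cnt (ssize_t cnt, ...) {
    static im_list_t *last = NULL;   /* the constructor owns this list */

    free(last);                      /* free(NULL) is a harmless no-op */
    last = malloc(sizeof(im_list_t) + cnt * sizeof(const void *));
    if (last == NULL)
        return NULL;
    last->cnt = cnt;

    va_list ap;
    va_start(ap, cnt);
    for (ssize_t i = 0; i < cnt; i++)
        last->data[i] = va_arg(ap, const void *);
    va_end(ap);
    return last;
}

/* Copy the whole single-allocation block; the caller owns the copy and
 * must eventually free() it. */
im_list_t * im_list_dup (im_list_t *src) {
    size_t sz = sizeof(im_list_t) + src->cnt * sizeof(const void *);
    im_list_t *copy = malloc(sz);
    if (copy != NULL)
        memcpy(copy, src, sz);
    return copy;
}
```

Because the list lives in one contiguous block, duplication is a single memcpy of the header and all of its slots, which is exactly the payoff of the single-malloc() design.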

Conclusion

I am not satisfied with such a small number of operations. These operations form the basis for a list, and they certainly provide the most interesting part of my list solution, but they are not enough to be fully useful. In my code, I have already implemented a small set of additional helpful operations, and I will add more as I use the list in real projects. This article, however, describes how we can actually accomplish a little elegance in C, extending the language without going all the way to macro hackery.

Be sure to examine all of the code in more detail. I have already included an example showing my use cases, as well as full documentation of the code proper.

Have fun, and tell me about other awesomely elegant hacks! C could really use elegant dictionary and set data structures, giving it a nice rounding of the structures that make Clojure so powerful.


Immutable Lists and Function Parameters in C by Savanni D'Gerinel is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. You can link to it, copy it, redistribute it, and modify it, but don't sell it or the modifications and don't take my name from it.
