# To Dream of Magick

Savanni D'Gerinel 17 Jul, 2014

# The Kinds of Error Reports

• Exceptions
• Either
• ErrorT or EitherT

# Parsing a File

• parseImage :: ByteString -> Either ParseError (Image PixelRGB8)
• readFile :: FilePath -> IO ByteString
• readImageFile fname = readFile fname >>= return . parseImage :: FilePath -> IO (Either ParseError (Image PixelRGB8))

# Exceptions

• throw :: Exception e => e -> a
• throwIO :: Exception e => e -> IO a
• catch :: Exception e => IO a -> (e -> IO a) -> IO a
• handle :: Exception e => (e -> IO a) -> IO a -> IO a
• try :: Exception e => IO a -> IO (Either e a)
• throwTo :: Exception e => ThreadId -> e -> IO ()
• error :: String -> a
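
As a quick illustration of how these combinators relate (a sketch, not part of the talk; the nonexistent path is just a convenient way to provoke an IOException):

```haskell
import Control.Exception (IOException, try, catch, handle)

main :: IO ()
main = do
    -- try reifies the exception into an Either
    res <- try (readFile "/does/not/exist") :: IO (Either IOException String)
    putStrLn (either (const "read failed") (const "read ok") res)
    -- catch and handle are the same function with the arguments flipped
    contents  <- readFile "/does/not/exist" `catch` fallback
    contents' <- handle fallback (readFile "/does/not/exist")
    print (contents == contents')
  where
    fallback :: IOException -> IO String
    fallback _ = return ""
```

Assuming the path really does not exist, this prints "read failed" and then True.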

# Exceptions

\footnotesize

readFileExc :: FilePath -> IO ByteString
readFileExc = handle silenceENoEnt . readFile
  where
    silenceENoEnt :: IOException -> IO ByteString
    silenceENoEnt exc | isDoesNotExistError exc = return empty
                      | otherwise = throw exc

readImageFileExc :: FilePath -> IO Image
readImageFileExc fname = do
    bs <- readFileExc fname
    either throw return (parseImage bs)


\normalsize

# Either

\footnotesize

data ReadImageError = ParseError ParseError | ReadError IOException

readFileEither :: FilePath -> IO (Either IOException ByteString)
readFileEither fn = try (readFile fn) >>= return . either silenceENoEnt Right
  where
    silenceENoEnt :: IOException -> Either IOException ByteString
    silenceENoEnt exc | isDoesNotExistError exc = Right empty
                      | otherwise = Left exc

readImageFileEither :: FilePath -> IO (Either ReadImageError Image)
readImageFileEither fn = do
    mBs <- readFileEither fn
    return $ case mBs of
        Left err -> Left (ReadError err)
        Right bs -> either (Left . ParseError) Right (parseImage bs)


\normalsize

# ErrorT and EitherT

• newtype ErrorT e m a = ErrorT { runErrorT :: m (Either e a) }
• newtype EitherT e m a = EitherT { runEitherT :: m (Either e a) }

# ErrorT and EitherT

• newtype ErrorT e m a = ErrorT { runErrorT :: m (Either e a) }
• newtype EitherT e m a = EitherT { runEitherT :: m (Either e a) }
• EitherT :: m (Either e a) -> EitherT e m a
• runEitherT :: EitherT e m a -> m (Either e a)

# EitherT

• catch :: Exception e => IO a -> (e -> IO a) -> IO a
• catch :: Exception e => IO (Either e a) -> (e -> IO (Either e' a)) -> IO (Either e' a)
• catchT :: Monad m => EitherT e m a -> (e -> EitherT e' m a) -> EitherT e' m a

# EitherT

• handle :: Exception e => (e -> IO a) -> IO a -> IO a
• handleT :: Monad m => (e -> EitherT e' m a) -> EitherT e m a -> EitherT e' m a

# EitherT

• try :: Exception e => IO a -> IO (Either e a)
• tryIO :: MonadIO m => IO a -> EitherT IOException m a
• EitherT . try :: Exception e => IO a -> EitherT e IO a

# EitherT

\footnotesize

catchT :: forall e e' m a. Monad m
       => EitherT e m a
       -> (e -> EitherT e' m a)
       -> EitherT e' m a
catchT action handler = EitherT $ do
    res <- runEitherT action :: m (Either e a)
    case res of
        Right val -> return (Right val)
        Left err  -> runEitherT (handler err)


\normalsize

# MonadError

\footnotesize

class Monad m => MonadError e m | m -> e where
    throwError :: e -> m a
    catchError :: m a -> (e -> m a) -> m a


\normalsize
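
Either e itself has a MonadError instance, which makes for a small sketch of the class without any transformer at all (my example, using the modern mtl module name Control.Monad.Except):

```haskell
import Control.Monad.Except (throwError, catchError)

-- throwError is Left at this instance, and catchError lets the
-- handler substitute a replacement result.
safeDiv :: Int -> Int -> Either String Int
safeDiv _ 0 = throwError "division by zero"
safeDiv x y = return (x `div` y)

main :: IO ()
main = do
    print (safeDiv 10 2)                              -- Right 5
    print (safeDiv 10 0)                              -- Left "division by zero"
    print (safeDiv 10 0 `catchError` \_ -> return 0)  -- Right 0
```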

# Revisiting readImageFileEither

\footnotesize

data ReadImageError = ParseError ParseError | ReadError IOException

readImageFileEither :: FilePath -> IO (Either ReadImageError Image)
readImageFileEither fn = do
    mBs <- readFileEither fn
    return $ case mBs of
        Left err -> Left (ReadError err)
        Right bs -> either (Left . ParseError) Right (parseImage bs)

\normalsize

# Revisiting readImageFileEither

\footnotesize

data ReadImageError = ParseError ParseError | ReadError IOException

readImageFileEither :: FilePath -> EitherT ReadImageError IO Image
readImageFileEither fn =

\normalsize

# Revisiting readImageFileEither

\footnotesize

readImageFileEither' :: FilePath -> EitherT ReadImageError IO Image
readImageFileEither' fn = do
    bs <- readFileEither' fn :: EitherT IOException IO ByteString

\normalsize

# Revisiting readImageFileEither

\footnotesize

readImageFileEither' :: FilePath -> EitherT ReadImageError IO Image
readImageFileEither' fn = do
    bs <- handleT (throwError . ReadError) (readFileEither' fn)
              :: EitherT ReadImageError IO ByteString

\normalsize

# Revisiting readImageFileEither

\footnotesize

readImageFileEither' :: FilePath -> EitherT ReadImageError IO Image
readImageFileEither' fn = do
    bs <- handleT (throwError . ReadError) (readFileEither' fn)
    case parseImage bs of
        Right val -> return val
        Left exc -> throwError (ParseError exc)

\normalsize

# Revisiting readImageFileEither

\footnotesize

readImageFileEither' :: FilePath -> EitherT ReadImageError IO Image
readImageFileEither' fn = do
    bs <- handleT (throwError . ReadError) (readFileEither' fn)
    either (throwError . ParseError) return (parseImage bs)

\normalsize

# Revisiting readImageFileEither

\footnotesize

readFileExc :: FilePath -> IO ByteString
readFileExc = handle silenceENoEnt . readFile
  where
    silenceENoEnt :: IOException -> IO ByteString
    silenceENoEnt exc | isDoesNotExistError exc = return empty
                      | otherwise = throw exc

readFileEither' :: FilePath -> EitherT IOException IO ByteString
readFileEither' = handleT silenceENoEnt . tryIO . readFile
  where
    silenceENoEnt :: IOException -> EitherT IOException IO ByteString
    silenceENoEnt exc | isDoesNotExistError exc = return empty
                      | otherwise = throwError exc

\normalsize

# Revisiting readImageFileEither

\footnotesize

silenceENoEnt' :: (MonadError IOException m, MonadIO m) => IOException -> m ByteString
silenceENoEnt' exc | isDoesNotExistError exc = return empty
                   | otherwise = throwError exc

readFileExc :: FilePath -> IO ByteString
readFileExc = handle silenceENoEnt' . readFile

readFileEither' :: FilePath -> EitherT IOException IO ByteString
readFileEither' = handleT silenceENoEnt' . tryIO . readFile

\normalsize

# Building an Application Stack

\footnotesize

data DiskStore = DiskStore { root :: FilePath }

newtype DiskStoreM a = DSM { uDSM :: ReaderT DiskStore (EitherT DataStoreError IO) a }
    deriving ( Functor, Applicative, Monad, MonadIO
             , MonadReader DiskStore, MonadError DataStoreError )

\normalsize

# Put an Object In

\footnotesize

putObject :: DataObject obj => Path -> obj -> Maybe ObjVersion -> DiskStoreM ObjVersion
putObject (Path elems) obj mVer = do
    DiskStore{..} <- ask
    let fsPath = root </> "objects" </> T.unpack (T.intercalate "/" elems)
    createDirectoryIfMissing True (dropFileName fsPath)
    BS.writeFile fsPath (content obj)
    return (ObjVersion T.empty)

\normalsize

• createDirectoryIfMissing :: Bool -> FilePath -> IO ()
• writeFile :: FilePath -> ByteString -> IO ()

# hoistIO

(IOException -> EitherT DataStoreError IO a) -> IO a -> DiskStoreM a

# hoistIO

\footnotesize

hoistIO :: (IOException -> EitherT DataStoreError IO a)
        -> IO a
        -> EitherT IOException IO a
hoistIO handler action = tryIO action

\normalsize

# hoistIO

\footnotesize

hoistIO :: (IOException -> EitherT DataStoreError IO a)
        -> IO a
        -> EitherT DataStoreError IO a
hoistIO handler action = handleT handler (tryIO action)

\normalsize

# hoistIO

\footnotesize

hoistIO :: (IOException -> EitherT DataStoreError IO a)
        -> IO a
        -> ReaderT DiskStore (EitherT DataStoreError IO) a
hoistIO handler action = ReaderT (\_ -> handleT handler (tryIO action))
hoistIO handler action = lift (handleT (throwError . handler) (tryIO action))

\normalsize

# hoistIO

\footnotesize

hoistIO :: (IOException -> EitherT DataStoreError IO a)
        -> IO a
        -> DiskStoreM a
hoistIO handler action = DSM (lift (handleT handler (tryIO action)))

\normalsize

# Back to adding the object

\footnotesize

putObject :: DataObject obj => Path -> obj -> Maybe ObjVersion -> DiskStoreM ObjVersion
putObject (Path elems) obj mVer = do
    DiskStore{..} <- ask
    let fsPath = root </> "objects" </> T.unpack (T.intercalate "/" elems)
    hoistIO (throwError . trIOExc) $ do
        createDirectoryIfMissing True (dropFileName fsPath)
        BS.writeFile fsPath (content obj)
    return (ObjVersion T.empty)


\normalsize

# Catching and handling inside the monad

\footnotesize

putObject :: DataObject obj
          => Path
          -> obj
          -> Maybe ObjVersion
          -> DiskStoreM ObjVersion
putObject (Path elems) obj mVer = do
    let fsPath = root </> "objects" </> T.unpack (T.intercalate "/" elems)
    res <- liftIO $ do
        createDirectoryIfMissing True (dropFileName fsPath)
        BS.writeFile fsPath (content obj)
        return (ObjVersion T.empty)

\normalsize

# Links

## Progress on Illuminance

Savanni D'Gerinel 16 May, 2014

I started the Illuminance project back in February of this year. Immediately after my last contract ended, I started working on the code in order to make this application real. Three months in, I both exult in everything I have learned and despair at how much remains to be done. This looked like such a simple problem, yet in its solution I have pursued so much new information that I constantly swim beneath a bare understanding of all that I have learned. Where am I now?

# User Interface

I have a very basic user interface that allows me to load images, see their previews, see error messages that occur during load, and run a “naive” HDR calculation. Error messages get displayed in the location where an image preview would normally appear. While this is not necessarily elegant, it allows me to display an error that is specific to one image, since a lot of things happen in parallel and in other threads.

Generally, no expensive operation occurs in the main GUI thread, so the GUI never actually hangs. I have not been able to test this exhaustively and I am sure that I will find places where I am wrong, but this does certainly seem to be the case. Images load in millisecond time, even very large images. They also load with reasonable space usage. I had some great success with creating my own image codec library and representation, but then I discovered that a combination of JuicyPixels and Repa solves the problem better by virtue of being more complete.

# On the HDR Calculation

I have spent a vast amount of time learning math in the hopes that I could figure out how to set up the HDR calculation. Ultimately, I read some Matlab code that somebody else wrote to figure out how to set up all of the terms for the problem.
I put all of that together, set up the matrices to solve, called into the LAPACK sgelsd function to actually run the calculation… and got the wrong answer. I am stumped, but there is still a lot that I can play with. I have solved problems before where the wrong answer cannot be reverse engineered and where fixing the problem can only be done by carefully examining the inputs. I imagine that is what will be required here.

I do have what I call a naive calculation in the application. This calculation scales every pixel by the associated exposure value and then averages the result. The end result has weird visual artifacts and doesn’t actually give me a high dynamic range anything. While I could improve this, all I wanted was to get something on the screen. I have since focused all of my efforts on getting the real calculation to work.

# What needs to be done

Oh, so much.

• Get the HDR calculation actually working
• Be able to save and load HDR files
• Tonemapping to LDR
• Making a nice GUI

Sigh. In other words, what is done so far only barely scratches the surface of a complete application.

# Diversion

In any event, I am diverting to a different task. Problems on Linux have left all of the current photo managers unusable, and have left me unable to even submit a bug report on the one that I’ve been using for several years, so I am scrambling to hack one out on my own. I have no idea whether this is going to work, because while I can handle most everything, there are GUI elements that I don’t know how to handle yet. We shall see. This does, however, put Illuminance on hold, but all of my photography is really on hold due to these bugs.

• Shotwell – Ugh. Doesn’t handle XMP files. Won’t update tags on a RAW file of any kind. Instead, it squirrels tags away in some custom database in a hidden folder so that no other program can see them. This problem burned me very badly once, so I refuse to use the program again unless full XMP support gets added.
• Digikam – An 8-million-pound gorilla (requiring a full installation of KDE) that I’ve been using with good success for a couple of years. Writes all tags for RAW files into an XMP sidecar, so I haven’t lost a tag in years. Unfortunately, it is also crashing randomly with either the Abort signal (and no reportable stack trace) or a Segmentation Fault (which might be reportable as soon as I get symbols installed). I think the problem lies somewhere in the KDE libs, not in Digikam itself. Either way, the application is unusable and I have no way forward to fixing it.

So, maybe I can produce something a little more lightweight. I’d been thinking of trying anyway, but I had not intended to do so until Illuminance was done. Hopefully I will get results faster since this is just a GUI and data management problem.

## Python Type Annotations

Savanni D'Gerinel 3 Jun, 2013

I have to admit, 40% of the reason I use Haskell lies in static type checking. Another 40% lies in strong, ironclad immutability. Only about 20% of my love of Haskell has anything to do with language expressiveness. Make no mistake, the language has expressiveness out the wazoo, but I truly love static typing and immutability.

One day recently turned into a disaster. One problem led to another in a task that should have been trivial and instead involved four hours of beating my head against the desk. Part of the problem was that my system under test had only acceptance tests, executed only with all of standard out and standard error shunted away to /dev/null. Either way, after I got my changes made, I decided to step out of the office to think.

Python has expressiveness. It has neither static type checking nor immutable values, and the language developers get really holy about this. I have no interest in arguing with them, as I believe they have decided to abandon all safety and flood the world with code that might blow up in production because, in some corner case that a STATIC TYPE CHECKER could have detected at compile time, they inadvertently passed an Integer to a function that expected a String.

So, I will not change Python the language, but I do want to make things nicer. Even though my solution will not get checked at compile time, it can certainly make debugging easier when a function crashes immediately due to a blown data type assertion, rather than propagating that incorrect data type potentially quite some distance.

I have put the code in a repository. You can get this particular file at type_checking.py

hg clone https://gitlab.com/savannidgerinel/python-tools

# The gruesome way

You could do it like this. I have done this.

def authenticate (username, password):
    assert isinstance(username, types.StringType)
    assert isinstance(password, types.StringType)
    ... do a bunch of authentication stuff and talking to databases
    ... and many things that belong in the IO monad
    assert isinstance(ret, UserClass)
    return ret

Ew. It will work in a pinch… but ew. This gets especially bad if I have several places from which I can return from the function. Yes, it improves the readability of the input parameters, but it does little for the return parameter beyond making postconditions.

# Slightly better

Assertions are things that “should never happen in code”. So, technically, an AssertionError is actually not a good thing to throw in the case of a type error. Python actually provides TypeError to indicate that a data type error has occurred. That is convenient. So, instead of calling assert, let’s create a function that will do the job and raise a better exception. And let’s build in making the parameter optional.
def typecheck (param, type_, optional=False):
    if not optional and param is None:
        raise TypeError('None is not permitted for this value')
    if param is not None and not isinstance(param, type_):
        raise TypeError('Expected type %r, got %r' % (type_, type(param)))

With this, your above code would look like this:

def authenticate (username, password):
    typecheck(username, types.StringType)
    typecheck(password, types.StringType)
    ... do a bunch of authentication stuff and talking to databases
    ... and many things that belong in the IO monad
    typecheck(ret, UserClass)
    return ret

This doesn’t improve the code much, but it does make for more descriptive error messages. I’m rather liking this improvement. But I can do better.

# Decorative rescue

I once read this round of pejoratives about static type users, and I wondered for a while what that meant. I looked things up, found a few references to using decorators to “decorate” type checks on functions, but I did not like the solutions. Maybe they were good solutions, but I wanted to solve it myself. Also, the typecheck module for Python appears to be almost seven years dead.

So I introduce some code that I wrote in an hour yesterday. First, I played a bit with the syntax, and then I put the syntax into a unit test. You do test your code, don’t you?

class TestTypeChecker(unittest.TestCase):
    @unittest.skip('disabled')
    def testNoParamsReturnString(self):
        @accepts()
        @returns(types.StringType)
        def f():
            return 'abcd'
        f()
        self.assertRaises(AssertionError, lambda: f('a'))

    @unittest.skip('disabled')
    def testParams(self):
        @accepts(types.StringType, types.IntType)
        @returns(types.NoneType)
        def f(var1, var2):
            return None
        f('abcd', 15)
        self.assertRaises(AssertionError, lambda: f('abcd', 'efgh'))
        self.assertRaises(AssertionError, lambda: f(15, 'efgh'))
        self.assertRaises(AssertionError, lambda: f())

In here you can see the syntax. Before each declaration of f(), I put an @accepts block and a @returns block.
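
Condensed into a directly runnable form (my sketch; Python 3 here, so str stands in for types.StringType, and the authenticate body is a stand-in for the real work), the typecheck helper behaves like this:

```python
def typecheck(param, type_, optional=False):
    # None is rejected unless the parameter is explicitly optional.
    if not optional and param is None:
        raise TypeError('None is not permitted for this value')
    # A present value must match the expected type.
    if param is not None and not isinstance(param, type_):
        raise TypeError('Expected type %r, got %r' % (type_, type(param)))

def authenticate(username, password):
    typecheck(username, str)
    typecheck(password, str)
    ret = 'user:' + username   # stand-in for the real authentication work
    typecheck(ret, str)
    return ret

print(authenticate('alice', 'secret'))
try:
    authenticate('alice', 42)
except TypeError as exc:
    print(exc)
```

The bad call fails immediately at the top of authenticate, before any of the body runs, which is the whole point.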
The desired data types get passed into @accepts and @returns as though these two calls are function calls. As it happens, they are. Additionally, I wanted to flag a parameter as optional. Not optional in that it can be omitted, but optional in that I could pass None instead of the declared type.

    def testMaybeParams(self):
        @accepts(types.StringType, Maybe(types.IntType))
        @returns(types.NoneType)
        def f(var1, var2):
            return None
        self.assertRaises(AssertionError, lambda: f('abcd', 'efgh'))
        f('abcd', None)
        f('abcd', 15)

    def testOptions(self):
        @accepts(types.StringType, Options(types.NoneType, types.StringType, types.IntType))
        @returns(Options(types.NoneType, types.IntType))
        def f(var1, var2):
            if var1 == 'None':
                return None
            else:
                return 15
        f('abcd', 15)
        f('abcd', '15')
        f('abcd', None)
        self.assertRaises(TypeError, lambda: f('abcd', 5.5))
        self.assertRaises(TypeError, lambda: f(None, 'abcd'))

Note Maybe. Declarations will come soon, but I created Maybe as a class that accepts a data type as a single parameter. If either of the decorators sees that the parameter type is Maybe, then it will allow None or the type passed in to Maybe in the corresponding parameter. And then, some time later, I created Options as a way to specify that a parameter can be any number of data types, including None.

So, finally, it is time to present the code itself. First, my two support classes. They are delightfully short.

class Maybe(object):
    def __init__(self, var_type):
        self.var_type = var_type

    def __repr__(self):
        return 'Maybe(%s)' % self.var_type

    def check(self, param):
        if param is None:
            return True
        if isinstance(param, self.var_type):
            return True
        return False

class Options(object):
    def __init__(self, *args):
        self.var_options = args

    def __repr__(self):
        return 'Options(%s)' % ','.join(map(repr, self.var_options))

    def check(self, param):
        for type_ in self.var_options:
            if isinstance(param, type_):
                return True
        return False

Both of these exist to give expressiveness to the type system, as above.
In both cases, it became simplest to create a check operation that would actually run the check against a parameter and return whether the parameter passes.

The actual guts of the type checking happen in a series of standalone functions.

def format_param_mismatch(idx, arg_type, expected_type):
    return 'Incorrect type in parameter %d: got %s, expected %s' % (idx, arg_type, expected_type)

First, I have a function, format_param_mismatch, to provide a good error message in the case of a parameter type mismatch. Note that the requirements are the index of the parameter, the argument type, and the expected argument type. I included the index because I found it necessary to say “Hey, a parameter doesn’t match and it is this parameter!”

def check_param(param, expected):
    if getattr(expected, 'check', None):
        return expected.check(param)
    return isinstance(param, expected)

This function is pretty simple. It only returns True or False. If the “expected” type has a check method (i.e., it is Maybe, Options, or some other supporting class that I have not created yet), get the result by calling the check method. Otherwise, just run isinstance.

def accepts(*var_types):
    def checked_function(f):
        def checker(*args, **kwargs):
            # if len(var_types) != len(args):
            mismatches = [(idx, type(arg), var_type)
                          for (idx, var_type, arg) in zip(itertools.count(), var_types, args)
                          if not check_param(arg, var_type)]
            if len(mismatches) != 0:
                raise TypeError('\n'.join(map(lambda x: format_param_mismatch(*x), mismatches)))
            return f(*args, **kwargs)
        return checker
    return checked_function

Decorators are complicated to code. First, the decorator itself takes parameters. That is *var_types, and that is what allows the syntax above. accepts then returns checked_function. Second, checked_function gets applied to your original function, and the magic plumbing of the decorator replaces your binding with this new function that wraps your original function.
Third, the decorator needs to return a function, and that function will take your original function as a parameter. So, at definition time the original function and the types will all get linked together, and your function binding will be replaced with the function that runs this check.

def returns(return_type):
    def checked_function(f):
        def checker(*args, **kwargs):
            val = f(*args, **kwargs)
            # assert isinstance(val, return_type), 'Incorrect return type: returned %s, expected %s' % (type(val), return_type)
            if not check_param(val, return_type):
                raise TypeError('Incorrect return type: returned %s, expected %r' % (type(val), return_type))
            return val
        return checker
    return checked_function

returns works in exactly the same way as accepts, but applies the data type check to the return value. With this, no matter how many return statements you have in your code, the actual returned value gets checked. Admittedly, it gets checked after whatever side effects your function had, so if you return invalid data from your database update, your database has already been updated and potentially corrupted.

# Limits

You have some limits still. First, and I think this part is critical, you still will not know that you have a data type error until you actually exercise a code path that exhibits the error. On the other hand, at least you find out very quickly that your error is not a different kind of logic error.

Also, not quite obviously, I do not have a way for you to check arbitrary argument lists. Any part of *args that does not get captured by a named parameter will not be checked, and none of **kwargs will be checked. The decorator syntax is simply too limited to describe such a check without the entire declaration becoming very cumbersome.

Generally, I would suggest avoiding arbitrary keyword arguments. It is not always a problem, but it does tend to lead to necessary but undocumented parameters.
If you must use them, use them for cases where the arbitrary keyword arguments just name optional arbitrary data fields, and give every actually necessary parameter an explicit name in your function declaration.

Overall, however, using these decorators liberally will help significantly with the task of tracking down problems that are ultimately data type errors. Additionally, the presence of the decorator helps document the API for the next person to come along, making explicit things that a programmer would otherwise have to dig into the code to find out. If you are like me and data type errors are the most common error you make, these decorators are going to be a big help.

Python Type Annotations by Savanni D'Gerinel is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 3.0 Unported License. You can link to it, copy it, redistribute it, and modify it, but don't sell it or the modifications and don't take my name from it.

## Haskell's Foreign Function Interface

Savanni D'Gerinel 5 Mar, 2013

As part of a research project, I did a tiny bit of work today to figure out the tiniest part of Haskell’s Foreign Function Interface. This is just the tip of the iceberg, but I thought I would show off the extremely simple demo that I worked out. I present both the C and Haskell versions of a program that feeds “abc” and “def” into the standard Unix crypt() function. Source material comes from The FFI Cookbook and from Foreign.C.String

# C, Venerable C

I did not do things in this order, but for demonstration we should start with the C operation. First, crypt() has the prototype char *crypt(const char *key, const char *salt);. The code to call it takes a mere eleven lines:

#include <unistd.h>
#include <stdlib.h>
#include <stdio.h>

int main (int argc, char **argv) {
    char *res;
    res = crypt("abc", "def");
    printf("%s\n", res);
    return EXIT_SUCCESS;
}

Compile this with gcc -o crypt crypt.c -lcrypt. When I compile, I get the warning crypt.c:7:9: warning: assignment makes pointer from integer without a cast [enabled by default], which I do not understand unless crypt() is not actually declared in unistd.h. But the code works correctly and as expected.

An important note: crypt() apparently returns a pointer to an internal data structure that you MUST NOT deallocate. I read comments in the man page about a GNU version of crypt(), crypt_r(), that is re-entrant, which reinforces my belief. If you want to keep the result of crypt(), be sure to copy it away, but otherwise do not deallocate it. Running the program against valgrind indicates no leaked memory, which really solidifies this understanding for me.

# Haskell FFI

The Haskell FFI was pretty easy to use, ultimately, and the cookbook really shows what we need to make this work. It takes a little bit more code, though, because we have to do the marshalling to and from C data types:

{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Foreign.C
import Foreign.Ptr
import Foreign.Marshal.Alloc

foreign import ccall "crypt" c_crypt :: CString -> CString -> CString

crypt key str = do
    keyC <- newCString key
    strC <- newCString str
    res <- peekCAString $ c_crypt keyC strC
    free keyC
    free strC
    return res

main = do
crypt "abc" "def" >>= putStrLn


foreign import ccall is some serious magic, but it pretty much amounts to the same as dlopen(). The catch is that you absolutely must declare the data type. So, here is the structure:

foreign import ccall "<optional header file> <function name>" <haskell binding> :: <type>


According to the FFI cookbook, prepending the name of the function with c_ is simply a convention.

What I do not know is whether c_crypt should have been declared as above, or as c_crypt :: CString -> CString -> IO CString. I am not yet clear on when a C call should be within the IO context and when it should not. I suspect that the compiler will not enforce anything and that it is up to me to make a (potentially unsafe) judgement call in the matter.
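
For what it is worth, a variant that makes the judgement call the other way — typing c_crypt in IO — can also lean on withCString to manage the buffers, so the explicit newCString/free pairs go away. This is my sketch, not the cookbook's; the IO typing is my assumption:

```haskell
{-# LANGUAGE ForeignFunctionInterface #-}
module Main where

import Foreign.C.String (CString, withCString, peekCAString)

-- Typed in IO, on the theory that crypt() reads and reuses hidden
-- internal state between calls.
foreign import ccall "crypt" c_crypt :: CString -> CString -> IO CString

crypt :: String -> String -> IO String
crypt key str =
    -- withCString allocates the C string, runs the action, and frees
    -- the buffer afterward, even if an exception fires.
    withCString key $ \keyC ->
        withCString str $ \strC ->
            c_crypt keyC strC >>= peekCAString

main :: IO ()
main = crypt "abc" "def" >>= putStrLn
```

As with the C version, this needs -lcrypt at link time.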

# What’s the Point?

I want to do Haskell on the new Ubuntu software stack for tablets. This uses QML (though the rest of Ubuntu seems to still use GTK). I have found a QML library for Haskell, but it is out of date and does not work with the current QT 5. So, I can either recode it from scratch, or I can fully understand it and update it for the modern QT. In both cases, I need to learn the FFI. But I’ve needed to learn the FFI for a long time, anyway.

Savanni D'Gerinel 1 Jun, 2012

Let us assume that, whether you are just learning Haskell for the first time or have long since mastered Haskell the language, you suddenly need to hook up to a database for the first time.

Don’t laugh. Many applications work extremely well without a database.

My applications, however, almost always have a relational database somewhere in their guts. I work with a lot of relational data. I have to learn Database.HDBC early, and many of you will, too.

You should know that I have actually practiced Haskell several times over the last six years. I probably have about one year of active development scattered across that time. This time I was finally able to comprehend some concepts that really kicked my ass in previous attempts. In that time, libraries have changed and Database.HDBC has emerged as the primary interface to SQL.

Putting data into the database is really easy. Even getting data back out is easy, up to a point. You can find basic instructions on connecting for the first time, putting data into the database, and running queries, over in Chapter 21 of Real World Haskell. My article does not cover that. Instead, it covers what to do with the data that HDBC hands back to you.

I have not written much about it, but I am building up a general Fitness Application that covers workouts of the form “x repetitions, y sets”. This application is currently distinct from my Cycling application, but I may merge them some time in the future.

# The application

I am slowly working my way through the 100 Pushup Challenge. I have not purchased the Android app or anything and have been keeping logs of my progress in a normal text-based journal. But, I want to do some data analysis so that I can see trend lines and whether and how quickly I am making progress.

Several different workout types all follow this data structure. Whether I am doing the 100 Pushup Challenge or the 200 Situp Challenge or any of the other challenges, all of the workouts share identical data structures. I call this a SetRepWorkout (a workout consisting of multiple sets of repetitions with rest time in between) and modelled it as such:

data WorkoutType = Pushups | Situps deriving (Show, Read, Eq)

data SetRep = CSet { reps :: Integer, rest :: Integer, next :: SetRep }
            | FSet { reps :: Integer }

data SetRepWorkout = SetRepWorkout { uuid :: Data.UUID.UUID,
                                     date :: Data.Time.Calendar.Day,
                                     workout_type :: WorkoutType,
                                     description :: String,
                                     sets :: SetRep }


In theory, I could have represented things like so:

data SetRep = CSet { reps :: Integer, rest :: Maybe Integer }
            | FSet { reps :: Integer }

data SetRepWorkout = SetRepWorkout { uuid :: Data.UUID.UUID,
                                     date :: Data.Time.Calendar.Day,
                                     workout_type :: WorkoutType,
                                     description :: String,
                                     sets :: [SetRep] }


In fact, you will see later that I use this representation when storing the data. Arguably, this could have worked out better, but I used the original recursive definition. Even more, I probably could have left out the rest time as not informative.

Now, given the representation, we have to work on the ever-annoying functions to move the data into and out of the database.

When I started, I investigated the Persistent database architecture. I will likely investigate it again later, but I discovered that Persistent does not handle recursive data structures. When I told Persistent to save a SetRep structure, it saved the first CSet just fine, but everything in the next field ended up serialized into the next column in the database. I find this representation unacceptable as it forces all data queries to load the data into the objects, making ad-hoc queries impossible.

Changing the representation to non-recursive would have fixed the problem, but I would not have created nearly so much fodder to explain. And, ultimately, when doing complex data modelling we will all find a case in which the ORM breaks down and we must use SQL instead.

For the final application, though, I do not rule out the possibility of changing the representation. I may even drop the rest time by dropping the SetRep structure completely and making sets a list of Integers. If I make this change, Persistent becomes more viable, but still has some problems. I will discuss those down in my Conclusions.

# The really easy way

Most documentation online covers running queries and retrieving data, but it rarely focuses on retrieval beyond the basic fetchRow or fetchAllRows functions. Once you have called these functions (or fetchAllRowsAL or fetchAllRowsMap), you have SqlValues, not values relevant to your domain model. Consider this:

conn <- connectSqlite3 "example.sqlite"
stmt <- prepare conn "SELECT * FROM Workout WHERE uuid=?"
execute stmt [toSql "9d4f98cd-63eb-4fad-8d9d-070f191b72db"]
rows <- fetchAllRowsAL' stmt


This looks distressingly procedural, but I could have rearranged it into a >>= pipeline to make it feel less so. When we examine rows in this case, we see:

[[("uuid",SqlByteString "9d4f98cd-63eb-4fad-8d9d-070f191b72db"),
("day",SqlByteString "2012-04-18"),
("workout_type",SqlByteString "Pushups"),
("description",SqlByteString "Week 3, Day 3, Column 2")]]


Now, remember that I called fetchAllRowsAL'. This will fetch all of the rows (hence the outermost list that has only a single element), and return each row as an association list of (column name, value). Further, this particular function fetches all data strictly, so I don’t have to worry about laziness issues when I finalize the stmt.

So, extract just the interesting workout with

let row = head rows


Then we need to convert from [(String, SqlValue)] to SetRepWorkout:

let uuid = fromJust $ fromString $ fromSql $ fromJust $ lookup "uuid" row
    day = parseTime defaultTimeLocale "%Y-%m-%d" $ fromSql $ fromJust $ lookup "day" row
    workout_type = workouttypeFromSql $ fromJust $ lookup "workout_type" row
    description = fromSql $ fromJust $ lookup "description" row
SetRepWorkout uuid day workout_type description (FSet 0)


This works, but it catches no errors. fromJust will throw an exception if it gets Nothing as an input. fromString :: String -> Maybe UUID returns Nothing if it receives a string in the wrong format. So, there are two classes of errors:

• column not present
• invalid data

I feel comfortable with allowing a column-not-present error to crash the application. The application expects a particular structure for the database and does not get it, so I consider it a programming error. But data in the database can get corrupted, and I do not want the program to crash because of an invalid data format. I would rather flag the error and handle it at a higher level of the application.

# A more robust (and almost as easy) way

It turns out that a more robust implementation is almost as easy to implement, but figuring that out took me seven hours of experimenting. Notice that HDBC contains safeFromSql :: Convertible SqlValue a => SqlValue -> ConvertResult a. This leads naturally to the definition of Convertible and to ConvertResult. The code does not include an instance for Convertible SqlValue UUID, but such an instance would help. If you had such an instance, you could say this:

let uuid = safeFromSql $ fromJust $ lookup "uuid" row


(Note: I left in a fromJust. lookup will return Nothing if the "uuid" column does not exist, and I consider this a programming error. So, I actually want fromJust there to blow up the program in that event.)

In fact, with proper instances, you could replace much of the above code like this:

let uuid = safeFromSql $ fromJust $ lookup "uuid" row
    day = safeFromSql $ fromJust $ lookup "day" row
    workout_type = safeFromSql $ fromJust $ lookup "workout_type" row
    description = safeFromSql $ fromJust $ lookup "description" row


Now, each of the fields will contain Left (ConvertError ...) instead of exploding when the code encounters invalid data for that field. You cannot feed that directly into a SetRepWorkout, and unravelling it by hand would be tedious, but ConvertResult b is just a type alias for Either ConvertError b, and so it functions in the Either monad. So the entire function above can look like this:

rowToWorkout :: [(String, SqlValue)] -> ConvertResult SetRepWorkout
rowToWorkout row = do
    uuid <- safeFromSql $ fromJust $ lookup "uuid" row
    day <- safeFromSql $ fromJust $ lookup "day" row
    workout_type <- safeFromSql $ fromJust $ lookup "workout_type" row
    description <- safeFromSql $ fromJust $ lookup "description" row
    return $ SetRepWorkout uuid day workout_type description (FSet 0)
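The payoff here is that, in the Either monad, the first Left aborts the rest of the do-block. Here is a minimal, base-only sketch of that short-circuiting behaviour; parseField and parseRow are hypothetical stand-ins for safeFromSql and rowToWorkout, not part of the project.

```haskell
-- parseField plays the role of safeFromSql: it either produces a value
-- or describes why it could not.
parseField :: String -> Either String Int
parseField s = case reads s of
    [(n, "")] -> Right n
    _         -> Left ("could not parse: " ++ s)

-- parseRow plays the role of rowToWorkout: the do-block stops at the
-- first field that fails to convert.
parseRow :: (String, String) -> Either String (Int, Int)
parseRow (a, b) = do
    x <- parseField a
    y <- parseField b
    return (x, y)
```

parseRow ("3", "4") yields Right (3, 4), while parseRow ("3", "oops") yields Left "could not parse: oops" without attempting anything further.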


To make this work, you must ensure that each of the above conversions has a corresponding Convertible SqlValue b instance for its target type.

As it turns out, HDBC already provides Convertible SqlValue Day and Convertible SqlValue String. I simply had to provide Convertible SqlValue UUID and Convertible SqlValue WorkoutType. Here I provide those implementations, complete with the language extensions necessary to even make them possible. I put the pragmas at the top of the file, but you could pass the equivalent flags on the command line or set them in your Cabal file.

{-# LANGUAGE TypeSynonymInstances, FlexibleInstances, MultiParamTypeClasses #-}

import qualified Data.ByteString.UTF8 as BUTF8 (toString)
import Data.UUID
import Data.Convertible.Base

instance Convertible SqlValue UUID where
    safeConvert (SqlString a) = case fromString a of
        Just b -> Right b
        Nothing -> Left $ ConvertError (show a) "SqlValue" "UUID" "Could not parse UUID"
    safeConvert (SqlByteString a) = safeConvert $ SqlString $ BUTF8.toString a
    safeConvert a = Left $ ConvertError (show a) "SqlValue" "UUID" "No conversion available"

instance Convertible SqlValue WorkoutType where
    safeConvert (SqlString a) = case a of
        "Pushups" -> Right Pushups
        "Situps" -> Right Situps
        _ -> Left $ ConvertError (show a) "SqlValue" "WorkoutType" "Unrecognized value"
    safeConvert (SqlByteString a) = safeConvert $ SqlString $ BUTF8.toString a
    safeConvert a = Left $ ConvertError (show a) "SqlValue" "WorkoutType" "No conversion available"


Given these implementations, and the rowToWorkout function, the selectWorkoutByID function is not bad, though it could probably use some more refinement.

selectWorkoutByID :: IConnection a => UUID -> a -> IO (Either String SetRepWorkout)
selectWorkoutByID w_id conn = do
    stmt <- prepare conn "SELECT * FROM Workout WHERE uuid=?"
    execute stmt [toSql $ show w_id]
    workout <- fetchAllRowsAL' stmt >>= mapM (return . rowToWorkout)
    return $ case length workout of
        0 -> Left "No workout found"
        1 -> case head workout of
            Right w -> Right w
            Left err -> Left $ show err
        _ -> Left "Multiple hits for a single workout Id"
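For completeness, calling this looks roughly like the following. This is a sketch only: it assumes the example.sqlite database from earlier, the sets accessor from the record definition, and that selectWorkoutByID is in scope; connectSqlite3 comes from Database.HDBC.Sqlite3 in the HDBC-sqlite3 package.

```haskell
import Database.HDBC (disconnect)
import Database.HDBC.Sqlite3 (connectSqlite3)
import Data.UUID (fromString)
import Data.Maybe (fromJust)

main :: IO ()
main = do
    conn <- connectSqlite3 "example.sqlite"
    -- The UUID from the example row earlier; fromJust is safe here only
    -- because we know this literal parses.
    let w_id = fromJust $ fromString "9d4f98cd-63eb-4fad-8d9d-070f191b72db"
    result <- selectWorkoutByID w_id conn
    case result of
        Left err -> putStrLn $ "lookup failed: " ++ err
        Right w  -> print $ sets w
    disconnect conn
```

Note that conversion failures surface in the Left branch as ordinary values, rather than as exceptions thrown somewhere inside the fetch.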


# Conclusions

You will need this information. For one reason or another, you will one day encounter a case in which an ORM cannot function, and then you will need to go to SQL. The needs of the database absolutely should not dictate your domain model. Persistent annoys me because, in an effort to stick with “Don’t Repeat Yourself”, it actually forces database knowledge into the domain model, which I feel is a violation of the Onion Architecture. Ultimately, though, this is a simple application and such a violation may not be so bad.

I used this case, however, for practice and for teaching. The program is so simple that I really can explore the implications of the Onion Architecture and enforce a level of architectural purity. I would like you to examine the code closely and consider for yourself whether such a setup makes sense in your application. Perhaps, when I learn Template Haskell, I will even develop a set of templates to describe this code and let the compiler figure it out for me.

More concerning to me, though, is that I am not sure how Persistent can be made to follow even First Normal Form when dealing with a composite data structure without modelling the data in some very odd ways. Now, I would accept this if I were going to a key-value store or a document database, but I’m going to a relational database and I want the option of running queries based on proper relations. In almost every relational database I build, I demand at least Second Normal Form, frequently also Third Normal Form, and possibly also some insights from even higher normal forms.

Model <-> Database code is annoying, tedious, and error-prone. I recently read an article by Patrick Loi, Dependency Injection Ceremony, and was really taken by the image of the Devil on one shoulder, the Angel on the other shoulder, and the Angel saying “You should be listening to that other guy”. Read the article, as it is hilarious and informative. It is also relevant here.

Martin Fowler has some objections to ORM Hate, and he is right in that rolling your own ORM almost never makes sense. On the other hand, the ORM always (no, really, ALWAYS) violates a good Onion Architecture because it forces you to define database representation in the middle of your domain model. Ew. Gross.

Your solution is going to vary. For now, due to the extreme pain of a project at work in which I definitely did not properly isolate the domain model from the database, I will practice quite a lot with the purest architecture I can get. After I have experienced the benefits and drawbacks of this purity, I will revisit the compromises I want to make on my projects.

I have the full set of code for this fitness project (which will slowly grow into an application) on Gitlab. If you have comments or feedback, please email them directly to me at savanni@alyra.org.