Luminescent Dreams

Mononix

July 09, 2023

As though I don’t have enough projects, I actually have another one.

For the longest time, I’ve had a huge amount of frustration with Nix as a developer, because normal build tools are hostile to Nix, and the Nix build tools are fractious and don’t really function well together or with the build tools for other languages. Just in Rust, I have tried buildRustPackage, buildRustCrate, Crane, crate2nix, and cargo2nix. In Javascript I’ve worked with node2nix, npm2nix, and buildNodePackage. There are no such tools for C, though arguably C applications are the easiest of all things to build in Nix: since C doesn’t have a package manager, there is nothing to fight against in order to build a C app in Nix with well-defined toolchains and dependencies.

Basically, C apps are the easiest ones to build and package with Nix. In many ways, that also makes them the easiest ones to distribute.

Most other applications, though, are a total pain. The best way I’ve generally found to distribute a Rust application is to build it entirely outside of Nix, upload the binary to somewhere, and then have the Nix derivation download, patch, and install it.

Technically, that’s the right way for end users.

As a developer, though, I really need to be able to use Nix as a toolchain. I want to be able to build up some reasonable layers atop all of the things that I build so that I can piece them together with a lot less pain.

So, I’m going to try this. I have barely any of the necessary skills, except insofar as I have built applications many times, I have used Nix, and I have even written a couple of Nix derivations.

I’m calling the project Mononix, because I will be using Nix to build a monorepo.

I’ve set up demo-repo as the test setup. I’ll be using this to figure out how to build a bunch of projects without all of the complications of my existing projects.

If this is something that interests you, please consider whether you would like to join the project, as a tester, as an advisor, or even as a contributor. I could also use a cheerleader.

Carrying out a View Model Request

April 07, 2023

Happy Friday, everyone!

Last week, I talked about view models in my Kifu application. This week, I’ve continued on by actually implementing the most important thing that can happen in a Go application: being able to put a stone on the board.

For this article, let’s put this request/response architecture into practice. You can watch me explaining and implementing this in my stream, Putting a stone on the board.

The Request

Let’s review the request itself:

#[derive(Clone, Copy, Debug)]
pub enum IntersectionElement {
    Unplayable,
    Empty(Request),
    Filled(StoneElement),
}

#[derive(Clone, Debug)]
pub struct BoardElement {
    pub size: Size,
    pub spaces: Vec<IntersectionElement>,
}

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub enum Request {
    PlayingField,
    PlayStoneRequest(PlayStoneRequest),
}
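
The PlayStoneRequest payload itself isn’t shown here. Based on how it is used in the example just below, a minimal sketch (the exact field types are my assumption) would be:

#[derive(Clone, Copy, Debug, PartialEq, Eq)]
pub struct PlayStoneRequest {
    // Board coordinates of the intersection that the user clicked on.
    pub column: u8,
    pub row: u8,
}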

So, when I click on a space with an IntersectionElement::Empty value, the contract between the core and the UI states that the UI should simply dispatch that particular request to the core. Let’s say the request is Request::PlayStoneRequest(PlayStoneRequest{ column: 3, row: 3 }). Here’s what happens:

  • The UI dispatches the request to the core, then returns to normal operations.
  • The core does whatever processing is necessary. In this case, it involves putting a stone on the board, figuring out what, if any, groups get captured, maybe sending the move across the network to another player, and then building an updated representation of the board.
  • The core communicates the updated representation to the UI.
  • The UI, upon receiving this message from the core, re-renders itself with the new representation.

Today we will skip all of the magic involved in evaluating the new board state, instead just being naive and placing the stone on the board No Matter What. The question becomes, with a decoupled system, how do we get this flow across application boundaries?

The FFI

This is the FFI that I have written, but it’s important to emphasize that it is customized for a Rust GTK application. GTK is not thread-safe, and Rust represents this by not allowing any GTK components to be shared across thread boundaries (they are neither Sync nor Send).

Additionally, it is convenient to have all GTK processing occur in the main loop of the application. Since my core code is inherently asynchronous, the easiest tool I could reach for is the custom channel type that GTK-RS provides:

move |app| {
    let (gtk_tx, gtk_rx) = gtk::glib::MainContext::channel::<Response>(gtk::glib::PRIORITY_DEFAULT);
    ...
}

Rust channels, including this custom one, are multi-producer, single-consumer, and Rust enforces this by allowing the sending half to be cloned as much as we want while the receiver cannot be cloned at all. Only one place can own and process the messages that arrive on the receiver, but anything holding a sender can send to it. So I’m using the above channel to allow core operations to send their results back to my UI, and my UI just processes those results as they arrive.

move |app| {
    let (gtk_tx, gtk_rx) = gtk::glib::MainContext::channel::<Response>(gtk::glib::PRIORITY_DEFAULT);
    ...
    gtk_rx.attach(None, move |message| {
        match message {
            Response::PlayingFieldView(view) => {
                /* do everything necessary to update the current UI */
            }
        }
        // Keep the receiver attached so future responses get handled, too.
        gtk::glib::Continue(true)
    });
}

And then on the other side, I have the function that dispatches requests into the core:

#[derive(Clone)]
pub struct CoreApi {
    pub gtk_tx: gtk::glib::Sender<Response>,
    pub rt: Arc<Runtime>,
    pub core: CoreApp,
}

impl CoreApi {
    pub fn dispatch(&self, request: Request) {
        self.rt.spawn({
            let gtk_tx = self.gtk_tx.clone();
            let core = self.core.clone();
            async move {
                let response = core.dispatch(request).await;
                // glib's Sender::send is synchronous, so no .await here; this
                // hands the response over to the GTK main loop.
                gtk_tx.send(response).unwrap();
            }
        });
    }
}

Dispatch spawns off a new asynchronous operation that is going to handle all of the processing. Whatever calls dispatch is now free to continue doing whatever it needs to do. UI elements generally cannot wait around for a response, so I don’t even give them the option to. Under the hood, though, this task will send a request to the core, await the response, and then send the response to the gtk_tx channel, the one I defined above.
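
As a concrete illustration, a click handler in the GTK layer can fire the request and return immediately. This is only a sketch, not the actual Kifu code: the function name and the way the clicked space gets looked up are my inventions, but CoreApi, BoardElement, and IntersectionElement are the types shown earlier.

// Hypothetical click handler: find the element for the clicked space and,
// if it is an empty intersection, hand its opaque request to the core.
fn handle_click(api: &CoreApi, board: &BoardElement, space_index: usize) {
    match &board.spaces[space_index] {
        IntersectionElement::Empty(request) => api.dispatch(*request),
        // Unplayable and Filled spaces do not respond to clicks.
        _ => {}
    }
}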

The Core dispatch function looks pretty pedestrian. You could find something like this in the server side of any client/server application that references a global shared state.

pub async fn dispatch(&self, request: Request) -> Response {
    match request {
        Request::PlayStoneRequest(request) => {
            let mut app_state = self.state.write().unwrap();
            app_state.place_stone(request);

            let game = app_state.game.as_ref().unwrap();
            Response::PlayingFieldView(playing_field(game))
        }
        ...
    }
}
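
For context, the self.state.write().unwrap() call implies that the core keeps its state behind a read/write lock. A plausible shape for CoreApp (the AppState name and this definition are my assumptions, not the actual Kifu code) is:

use std::sync::{Arc, RwLock};

#[derive(Clone)]
pub struct CoreApp {
    // Cloning CoreApp only clones the Arc, so every handle shares the same
    // application state; the RwLock serializes writers against readers.
    state: Arc<RwLock<AppState>>,
}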

Looking to Kotlin

I’ve never written more than a few trivial lines of Kotlin, nor have I built an Android application before, so I have a lot in front of me on that topic. The FFI that I wrote above cannot work on Android: GTK does not exist there, so the communication channel cannot work, and CoreApi is a class that I put into the GTK application, not into the Core. While I do not yet know what the Android FFI is going to look like, I imagine that I’m going to build a library that provides Android bindings and internally manages the runtime executor.

Wrapping up

What I showed here is a mechanical nuts-and-bolts illustration of a basic concept:

  • The UI knows nothing about the requests it holds. It simply knows, by contract, that when the user clicks on a space, it needs to find the request associated with that space and send it to the core. It does not wait for a response, because there is no way to know how long the request will take to resolve, and the UI must remain responsive. This behavior is put into the definition of the particular UI element.
  • The dispatch function creates an async task for processing the request.
  • The dispatch function also knows how to send the response back to the UI. The GTK UI has one place in which it can process all such asynchronous signals.

Over the next few weeks, I face the task of implementing the rules of Go, and also setting up an Android toolchain.

View Model Architecture in the Kifu

March 31, 2023

Good morning and Happy Friday, everyone!

I’ve continued on with a couple of weeks of working on a new Go application (which I currently call Kifu, after the traditional name for a record of a game of Go). I have a set of goals for this application that mean I have to design a very disciplined architecture, since I want to run the application on both Linux and Android.

User Stories

Before I get into the topic, I’m going to start out with a short list of the user stories that I’ve written for the application.

  • As a player, student, or reviewer, I want to be able to keep and study a database of game records.
  • As a player or a reviewer, I want to annotate a game record.
  • As a player or game recorder, I want to be able to record a game as it progresses.
  • As a player, I want to play against another player on a shared device.
  • As a player, I want to play against another player over the network.
  • As a player, I want to play against an AI.
  • As a player, I want to play against people on OGS and KGS.

This is where I’m going with the game. I’m currently at the stage of getting a basic, interactive Goban working. Since this is the initial architectural stage, today I’m going to talk about one of the key decisions: the Core and View Models.

Business Logic and User Interface

The idea of keeping a strict division between Business Logic and User Interface is nothing new. What is new here is an architecture that we use at 1Password, and which the folks at Airbnb have used for their website. In these architectures, the user interface actually makes even fewer decisions than normal, being gradually reduced (as much as is viable) to a renderer.

A View Model in this context is a data structure which the core emits and which describes a user interface in the abstract. For example, for the playing board above:

#[derive(Clone, Debug)]
pub struct StoneElement {
    pub color: Color,
}

#[derive(Clone, Debug)]
pub struct GobanElement {
    pub size: Size,
    pub spaces: Vec<Option<StoneElement>>,
}
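
Building one of these is just a pure projection from whatever the core’s internal board representation happens to be. As a rough sketch (the function and its parameters are hypothetical, not the real Kifu code):

// Hypothetical builder: project the board contents into a view model.
fn goban_element(size: Size, board: &[Option<Color>]) -> GobanElement {
    GobanElement {
        size,
        spaces: board
            .iter()
            .cloned()
            .map(|stone| stone.map(|color| StoneElement { color }))
            .collect(),
    }
}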

When anything interesting happens, the Core will construct a new copy of this structure and ship it to the UI, which is responsible for rendering it:

pub struct GobanPrivate {
    drawing_area: gtk::DrawingArea,

    current_player: Rc<RefCell<Color>>,
    goban: Rc<RefCell<GobanElement>>,
    cursor_location: Rc<RefCell<Addr>>,
}
impl ObjectSubclass for GobanPrivate { ... }
impl WidgetImpl for GobanPrivate {}
impl GridImpl for GobanPrivate {}

glib::wrapper! {
    pub struct Goban(ObjectSubclass<GobanPrivate>) @extends gtk::Grid, gtk::Widget;
}
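
When a fresh GobanElement arrives, the widget just has to swap it in and repaint. A minimal sketch of that update path (the set_view name is my invention, and I’m assuming the usual gtk-rs subclass imp() accessor from gtk::subclass::prelude):

impl Goban {
    // Hypothetical update path: store the new view model and ask GTK to
    // redraw the board on the next frame.
    pub fn set_view(&self, goban: GobanElement) {
        let private = self.imp();
        *private.goban.borrow_mut() = goban;
        private.drawing_area.queue_draw();
    }
}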

Again, this is nothing particularly controversial. What becomes more interesting is this:

#[derive(Clone, Copy, Debug)]
pub enum Request {
    PlaceStone(u8, u8),
    PlayingField,
}

pub enum Response {
    PlayingField(PlayingFieldView),
}

// Request must be Copy so that IntersectionElement can derive Copy below.
#[derive(Clone, Copy, Debug)]
pub enum IntersectionElement {
    Unplayable,
    Empty(Request),
    Filled(Color),
}

// Vec is not Copy, so the container only derives Clone.
#[derive(Clone, Debug)]
pub struct GobanElement {
    pub size: Size,
    pub spaces: Vec<IntersectionElement>,
}

So, now there’s something new. An intersection could hold a stone, in which case the UI just needs to render it; or it could be empty, in which case there is an action associated with it: the request which the UI must send to the core if the user clicks on that location.

This is important. The UI only knows (and this only by contractual design of the Goban) that an empty intersection point can be clicked upon. On click, there is an opaque request that the UI should send back to the Core.

The UI knows nothing about what is in this request.

While this may seem like a small oddity by itself, taken further it means we slowly move behavior out of the UI and into the Core. In doing this, suddenly there is less to implement when I add the Android UI. The Android UI, like the GTK UI, needs only to render the Goban, and on click it needs to send the request to Core and render whatever Core returns. Core will handle all of the logic of deciding whether the move was valid and evaluating how the game changes as a result of that move. Do stones get removed from the board? The new view model will no longer have those stones. Is the move suicide, and do the rules forbid suicide moves? The new view model will contain no changes, except maybe a warning to the user.

Intersection Element

So, for a moment, let’s talk about this contract more explicitly. I’ve not documented this anywhere, so I’m figuring it out as we go.

#[derive(Clone, Copy, Debug)]
pub enum IntersectionElement {
    Unplayable,
    Empty(Request),
    Filled(Color),
}

  • An IntersectionElement only makes sense in the context of a Goban and cannot appear in any other user interface element.
  • An IntersectionElement::Unplayable space cannot be played. The UI must not respond to clicks on this space and should not render a ghost stone on the space.
  • An IntersectionElement::Empty space is playable. The UI should render a ghost stone of the current player color when the cursor is over that space. If the user clicks on the space, the UI should send the Request to Core.
  • An IntersectionElement::Filled space is unplayable. The UI should render a stone of the specified color there.

And thus, the UI is now free to make decisions about all of the details of rendering, and otherwise needs to make no logical decisions whatsoever. In fact, if it really treats the Request as an opaque data structure, we can change the Request to something which contains more information without changing any of the user interfaces.
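
In the GTK widget, honoring this contract comes down to a single match over the intersection. Roughly like this, as a sketch of the shape rather than the literal drawing code; the DrawContext type, the draw_* helpers, and the hovered/clicked flags are all hypothetical, and api stands for whatever dispatch entry point the UI holds (CoreApi in the post above):

// Sketch of per-intersection handling in the UI layer.
fn render_intersection(
    api: &CoreApi,
    context: &DrawContext,
    intersection: &IntersectionElement,
    hovered: bool,
    clicked: bool,
    current_player: Color,
) {
    match intersection {
        // Nothing to draw, and clicks are ignored.
        IntersectionElement::Unplayable => {}
        // Render the stone; the space cannot be played.
        IntersectionElement::Filled(color) => draw_stone(context, color),
        // Playable: ghost stone on hover, dispatch the opaque request on click.
        IntersectionElement::Empty(request) => {
            if hovered {
                draw_ghost_stone(context, current_player);
            }
            if clicked {
                api.dispatch(*request);
            }
        }
    }
}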

Keeping it going

As always, the devil is in the details. This model doesn’t work flawlessly everywhere, and some logic does need to get encoded into the UI. It is, however, much less than you might think, and you quickly reap the benefits as soon as you have a second user interface.

The Project

Kifu, though lacking all of the necessary file headers, is a GPL-3 project and available on my Gitea instance. Feel free to take a look at it, and contact me if you wish to participate in some way.

Let me know, either on Mastodon or on Matrix, if you’re excited to use this app. I would love some positive input and even feature requests.

Happy Friday, and enjoy your weekend.


Contact me

  • Matrix: @savanni:luminescent-dreams.com
  • Mastodon: @savanni@anarchism.space (NSFW) or @savanni@esperanto.masto.host (Esperanto-only)
  • Email: savanni@luminescent-dreams.com

Coding Together for March 30th, 2023

March 30, 2023

For the last several months, I’ve been doing a weekly one-hour coding stream. Starting this week, I’ve moved it to Thursday night (tonight!) at 8pm EDT (0000 UTC).

In the last stream, I talked about Kifu, what it’s for, and why I’m trying my hand at building an application, and then I spent the rest of the stream designing the application in HTML form. Tonight, I’ll explain the architecture of the application as I have built it, and why I’ve architected it this way. Then we will dig into implementing the very first instruction: placing a stone on the board.

Catch you tonight on my YouTube channel: https://www.youtube.com/@SavanniDGerinel/streams

Transformed

February 22, 2023
[Photo: Snowglow, December 1, 2019]

Weather changes can completely transform a familiar space. I feel that snow can make the biggest (temporary) transformations.