Purging Stale Branches from GitLab

Written on 15 Nov 2019

Please note this post details how to permanently delete branches from Git - make sure you’re comfortable/confident in doing this!

GitLab has a big red button in the repository branch list to Delete merged branches, which prompts you to be really sure you want to delete them, because there’s no way back.

Unfortunately if you have a backlog of stale branches - say, 500 - then you might be a little nervous about hitting this button…just in case.

Instead of clicking through 25 pages of branch listings, we can use the GitLab API to pull out our branches, pipe them through jq and pick out the ones we want to delete. Branches can then be deleted from the git CLI, and hopefully the branch backlog starts to get a little clearer.

You’ll need a couple of things to get started:

  • A Personal Access Token to pass to the API - find it via the “Access Tokens” submenu on your “Settings” page (reached by clicking your profile menu in the top right). The token needs api permissions
  • Project ID for the project with the stale branches. This is shown Project ID: NNNNNNNN at the top of your repo’s front page or in “Settings > General”
  • Something to submit the API request and save the JSON response(s)
  • The jq utility to query the data
  • Git CLI to push branch deletions

Getting your branch list by API

It’s quite straightforward, but there’s a limit on how many branches are returned per request, so you need to page through the results (the API returns Link fields in the response headers).

Using the Branch endpoint you can get your branch list with the following

curl --header "PRIVATE-TOKEN: <your_access_token>" "https://gitlab.example.com/api/v4/projects/<your_project_id>/repository/branches?per_page=100&page=1"

100 seems to be the largest page size we can request, so inspect the Link field in the response headers to see how many pages there are and increment the page parameter accordingly. For demonstration purposes, the JSON bodies of these requests can be saved to response-N.json, where N is the page number.
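If you’d rather script the paging than save each response by hand, here’s a minimal Ruby sketch. The host, project ID and token in the usage comment are placeholders, and the empty-page check stands in for parsing the Link header:

```ruby
require 'json'
require 'net/http'

# Fetch one page of branches from the GitLab branches endpoint.
def fetch_page(base, project_id, token, page, per_page = 100)
  uri = URI("#{base}/api/v4/projects/#{project_id}/repository/branches" \
            "?per_page=#{per_page}&page=#{page}")
  req = Net::HTTP::Get.new(uri, 'PRIVATE-TOKEN' => token)
  res = Net::HTTP.start(uri.host, uri.port, use_ssl: true) { |h| h.request(req) }
  JSON.parse(res.body)
end

# Keep requesting pages until an empty one comes back.
def all_branches(per_page = 100)
  branches = []
  page = 1
  loop do
    batch = yield(page, per_page)
    break if batch.empty?
    branches.concat(batch)
    page += 1
  end
  branches
end

# Usage:
#   branches = all_branches do |page, per|
#     fetch_page('https://gitlab.example.com', 1234, '<your_access_token>', page, per)
#   end
```

Yielding the fetch as a block keeps the paging logic separate from the HTTP call, so it’s easy to test or swap out.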

Merging the JSON files

The jq tool can happily merge files:

jq -s 'add' response-*.json > all.json

…will merge the contents of each response array into a single array in all.json. This makes it a bit easier to query.

Extracting the relevant fields

Because we’re looking to purge out really stale branches, we’ll pull out the branch name and committed date so we can sort the list

jq -r '.[] | .commit.committed_date + "," + .name' all.json | sort

This will create a list with the oldest branches first - commit date and name separated by a comma. You can even add a | pbcopy pipe on the end if you’re on a Mac and ready to paste these elsewhere.
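The same extraction can be done in Ruby if you’d rather skip jq - a small sketch that assumes the merged all.json from the previous step:

```ruby
require 'json'

# Build "committed_date,name" lines, oldest branch first.
def stale_report(branches)
  branches
    .map { |b| [b.dig('commit', 'committed_date'), b['name']] }
    .sort
    .map { |date, name| "#{date},#{name}" }
end

# Usage:
#   puts stale_report(JSON.parse(File.read('all.json')))
```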

Analysing stale branches

I’ve simply pasted these into a spreadsheet, split the text into columns and confirmed the dates and names of the branches I want to delete. You can add a handy third column to generate the git command for you.

="git push origin --delete " & B2

This should yield a command like git push origin --delete feature/name. Running this git command in your repo’s directory will trigger a remote delete of the branch, and you’re on your way to cleaning up your stale branch backlog.
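If the spreadsheet step feels unnecessary, the same commands can be generated straight from the JSON - a sketch where the cutoff date is an example value:

```ruby
require 'json'
require 'time'

# One delete command per branch whose last commit predates the cutoff.
def delete_commands(branches, cutoff)
  branches
    .select { |b| Time.parse(b.dig('commit', 'committed_date')) < cutoff }
    .map { |b| "git push origin --delete #{b['name']}" }
end

# Usage:
#   branches = JSON.parse(File.read('all.json'))
#   puts delete_commands(branches, Time.parse('2019-01-01'))
```

Redirect the output to a shell script if you want one last chance to review before running it.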

Roda Framework (Ruby) Guide - Part 1

Written on 13 Jul 2019

Ruby is an amazingly expressive scripting language that has gained, largely due to Ruby on Rails, a huge following and developer ecosystem. So much so that a lot of people struggle to detangle Ruby (the programming language) from Rails (the framework). But for the new, or even seasoned, Ruby developer there’s a healthy market of non-Rails frameworks out there.

Having built web apps in Ramaze, Sinatra, Grape (API-centric) and Rails, I continue to be blown away - after 2 years - by the relative newcomer that is Roda. Originally forked from the Cuba microframework, Roda has been nurtured and furthered by Jeremy Evans to become a beautifully powerful web or API framework. If that name sounds familiar, he is also the maintainer of the Sequel ORM and numerous other highly visible Ruby and OpenBSD projects.

Why Roda

This post, and subsequent ones, isn’t about Roda vs. Rails/Sinatra/Hanami. It’s about why you might want to choose to use Roda, and helping you on that journey. Most mature Ruby web frameworks can be used to build APIs/CRUDs/CMSs and it’s very unlikely your users care one bit about your framework assuming the thing keeps running.

Roda is a great choice if any of these reasons resonate:

  • Start with a small, compact core of an application and grow complexity over time and learning as you go
  • Have fine grained control on features and performance
  • Use a routing-tree approach to represent your app instead of being persuaded towards index/view/edit controller routes
  • You don’t want to be overwhelmed with generator/boiler-plate code

Again, this isn’t a better-or-worse comment on Ruby web frameworks, but rather a journey into the fast and powerful Roda!

Your First Roda App

To get started we’re going to create a super simple app with the following directories and files:

├── Gemfile
├── app.rb
├── config.ru
├── routes
│   └── first-route.rb
└── views
    ├── index.erb
    └── layout.erb

You can copy and paste the lines below to re-create this structure:

mkdir 1-first-app && cd 1-first-app
mkdir views routes
touch Gemfile app.rb config.ru routes/first-route.rb views/layout.erb views/index.erb

It’s possible to create a one-file Roda app (config.ru) but this isn’t representative of an app you’d grow over time!

With the above structure created, open up Gemfile in your editor of choice and update it to match per below:

source 'https://rubygems.org'

gem 'roda', '~> 3.22'           # Roda web framework
gem 'tilt', '~> 2.0.6'          # Templating engine
gem 'erubi', '~> 1.5'           # Template syntax
gem 'puma', '~> 4.0'            # Web application server
gem 'rack-unreloader'           # Allows code reloading

I’ve added comments, but to briefly summarise: roda is the web framework; tilt is a generic interface for working with multiple template languages/syntaxes; erubi is an ERB implementation by Jeremy Evans (ERB being the de facto Ruby template language supported in Ruby’s standard library); puma is the web application server (what Apache or Nginx would talk to); and rack-unreloader reloads modified code so you can refresh the browser and see code changes without having to cycle your app.

Once you’ve installed your gems with bundle install, open up app.rb so we can create the core of this first, basic app.

Setting up the Roda App

Update your app.rb file to look like the below:

require 'roda'

class App < Roda
  plugin :render, escape: true
  plugin :hash_routes

  Unreloader.require('routes') {}

  route do |r|
    r.hash_routes
  end
end

So, what’s happening here? We’re creating a class that inherits from Roda - this App class is what we’re going to later run via config.ru to serve the application.

You can see we’re loading a few plugins to introduce optional functionality to the app. As mentioned earlier, Roda allows you to build powerful and complex apps, but most functionality gets loaded via plugins to keep things lean.

The render plugin is Roda’s interface to the tilt library, and is responsible for rendering views and templates. Note that the plugin defaults to use the layout template defined in views/layout.erb unless you override it.

The hash_routes plugin is what we’re using to route URL paths to different blocks (i.e. bits of code). This lets us split our Controller logic into separate files and load them using Unreloader.require('routes') {} (this loads Ruby files in the routes directory). We’ll see shortly how the controller looks, and routing will start to make more sense.

Finally we set up the “master” route, and tell Roda to use the hash_routes plugin to route requests. Note that historically you might use the multi_route plugin to map routes to controller code, but hash_routes effectively supersedes it and offers more - and faster - routing features.

Roda Route + Controller

Even though Roda could be spun up in a single config.ru file and run with Rack, that doesn’t really help set up a structure for a real world app. So now you can open up routes/first-route.rb and update it to match the below:

class App
  hash_branch 'first-route' do |r|
    r.on 'hello' do
      r.get do
        @title = "Hello World"
        view 'index'
      end
    end
  end
end

Here we have a very basic route and controller! The hash_branch method block goes hand-in-hand with the hash_routes invocation and tells Roda to use this block to handle web requests that start /first-route.

With the above block, attempting to access /first-route would give you a blank page and a 404 response code (we’ll get to proper error pages in a future post). However, if you GET /first-route/hello you’ll get the contents of views/index.erb rendered through views/layout.erb, with an @title instance variable set to “Hello World”.

Remember that the render plugin is defaulting to views/layout.erb. You could override the layout for an individual view method call with:

view 'index', layout: 'alternative-layout'

A basic Layout + View

We’re not overly concerned with presentation here, so open up views/layout.erb and paste in the following:

<!DOCTYPE html>
<html>
  <head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width,initial-scale=1">
    <title><%= @title || 'My First Roda App' %></title>
  </head>
  <body>
    <%== yield %>
  </body>
</html>

The yield keyword is where the content of your view will go - all fairly standard stuff if you’ve done any web app work before.

Our views/index.erb is going to be very simple…

<h2>Hello Roda!</h2>

We’re pretty much done - we just need to tell Rack (and Puma) how to load and run the application.

Running a Roda App

Open up config.ru and update it to match:

dev = ENV['RACK_ENV'] == 'development'

require 'rack/unreloader'

Unreloader = Rack::Unreloader.new(subclasses: %w'Roda', reload: dev){App}
Unreloader.require('app.rb'){'App'}
run(dev ? Unreloader : App.freeze.app)

You can see here that we’re heavily leveraging Unreloader - this watches files and reloads the classes in the app if they’ve changed. It’s kind of like hot reloading, and makes development a much more sane experience. If you’re using other libraries (e.g. Sequel) you’d pass an extra class name to subclasses so those classes are also reloaded.

The run keyword is a rack method call and is used by Puma to actually load and run the app.

Saving the above, then running bundle exec rackup in your terminal, should fire up the app on (by default) port 9292, and you can access the route by heading to http://localhost:9292/first-route/hello

Note that you can also access /first-route/hello/foo/bar/blah and still get the same result. Why?! Well, our r.get block under r.on 'hello' matches everything starting with /first-route/hello. If you change r.on 'hello' to r.is 'hello', you tell Roda to use an “exact match” on the routing tree and it becomes very strict. The verbs and routing tree take a little getting used to; you may find moving between on and is overly fussy and choose to stick to one or the other.
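The distinction can be sketched in plain Ruby - this is an illustration of the matching rule, not Roda’s actual implementation:

```ruby
# r.on-style prefix match: the segment just has to lead the path.
def on_match?(path, segment)
  path == "/#{segment}" || path.start_with?("/#{segment}/")
end

# r.is-style exact match: the segment must consume the whole path.
def is_match?(path, segment)
  path == "/#{segment}"
end

on_match?('/hello/foo/bar', 'hello')  # => true
is_match?('/hello/foo/bar', 'hello')  # => false
```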

If you’re using r.is you probably want to update your plugins to include plugin :slash_path_empty. Exact matching won’t match if anything trails the segment - even a slash. For example:

# Without slash_path_empty
r.is 'foo'        # /foo will match, /foo/ will 404

# With slash_path_empty
r.is 'foo'        # /foo will match, /foo/ will also match

Given you can’t stop a user typing in a URL, you probably want to avoid sending them to a 404 for the sake of a trailing slash!

A Note on Application Structure

What we’ve built above is obviously very basic, but it closely matches the official conventions for structuring a larger app. One deviation is that I prefer to use app.rb for the application file, as it reduces cognitive load when searching for and opening the file.

We’re still missing a few key elements that we’ll step through in a future post, though:

  • Style and script assets - Roda has a great :assets plugin which can pre-compile and cache Sass and JavaScript (and more)
  • Models and Database connectivity (and migrations)
  • Helper modules/classes to help keep our app DRY
  • Test files

I’ll be covering more Roda in a future post!

Ultimate PostgreSQL Slug Function

Written on 13 May 2019

Having deployed a fair few web apps, APIs and utilities, I’ve grown more familiar and comfortable with PostgreSQL. As that familiarity has grown, I’ve started moving more functionality into the database; particularly using triggers/functions on UPDATE/INSERT as well as pre-creating JSON(B) objects that can effectively skip the application layer logic and be sent straight to the client.

Slug generation is low hanging fruit where PostgreSQL can excel. Moving this functionality to the database gives you the guarantee that no matter how rows are inserted/updated, they will get a consistent slug.

What is a slug?!

If you’re reading this, you probably know, but a slug is a URL/SEO friendly representation of a string. If for example you wrote an article:

The World’s “Best” Cafés!

You’d want a slug function to create a representation that looks a bit like:

the-worlds-best-cafes

Granted, browsers can probably handle the accented “e”, but it’s good to be over-conservative for backwards compatibility.

PostgreSQL slug function

The most comprehensive slug function I’ve seen for Postgres (and in general) is by ianks on GitHub. Slug functions are one of those things that never seems to have a definitive “best and final” version, but this one has been very reliable for me.

It’s verbose so you can feel comfortable modifying it, it’s pretty robust and it leverages the unaccent extension in Postgres to help with transcoding/transliterating non-ASCII characters (i.e. it’ll convert é => e for you).

One limitation of this function, though, is that it expands quotes (single and double) into a hyphen which - for me - is overkill. I’ve subsequently tweaked the function:


CREATE EXTENSION IF NOT EXISTS "unaccent";

CREATE OR REPLACE FUNCTION slugify("value" TEXT)
RETURNS TEXT AS $$
  -- removes accents (diacritic signs) from a given string --
  WITH "unaccented" AS (
    SELECT unaccent("value") AS "value"
  ),
  -- lowercases the string
  "lowercase" AS (
    SELECT lower("value") AS "value"
    FROM "unaccented"
  ),
  -- remove single and double quotes
  "removed_quotes" AS (
    SELECT regexp_replace("value", '[''"]+', '', 'gi') AS "value"
    FROM "lowercase"
  ),
  -- replaces anything that's not a letter, number, hyphen('-'), or underscore('_') with a hyphen('-')
  "hyphenated" AS (
    SELECT regexp_replace("value", '[^a-z0-9\\-_]+', '-', 'gi') AS "value"
    FROM "removed_quotes"
  ),
  -- trims hyphens('-') if they exist on the head or tail of the string
  "trimmed" AS (
    SELECT regexp_replace(regexp_replace("value", '\-+$', ''), '^\-', '') AS "value"
    FROM "hyphenated"
  )
  SELECT "value" FROM "trimmed";
$$ LANGUAGE SQL STRICT IMMUTABLE;

Now you can run the earlier example:

-- Having to escape the quote here to demo...
select slugify('The World''s "Best" Cafés!');

And you get:

| slugify               |
|-----------------------|
| the-worlds-best-cafes |
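If you want to sanity-check the same rules outside the database, here’s a Ruby mirror of the pipeline. This is a sketch: Ruby’s Unicode normalisation stands in for Postgres’s unaccent, so edge cases may differ between the two:

```ruby
# Same steps as the SQL: unaccent, lowercase, strip quotes,
# hyphenate everything else, trim leading/trailing hyphens.
def slugify(value)
  value
    .unicode_normalize(:nfkd)     # split accented chars into base char + combining mark
    .encode('ASCII', invalid: :replace, undef: :replace, replace: '') # drop the marks
    .downcase
    .gsub(/['"]+/, '')            # remove single and double quotes
    .gsub(/[^a-z0-9\-_]+/, '-')   # everything else becomes a hyphen
    .gsub(/\A-+|-+\z/, '')        # trim hyphens at head and tail
end

slugify(%q{The World's "Best" Cafés!})  # => "the-worlds-best-cafes"
```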

Using the slug function

Typically you will only want to create a slug when a record is created. If you update the title of something, you probably want to preserve the slug, as it may well be part of your website’s URL structure. You could update your slug each time, but you’d need an audit table to redirect people from the previous slug(s) to the new ones.

Depending on your schema, you can have a function and trigger for each table that requires a slug, OR if you have consistent column names over tables you can create a generic function that faithfully assumes you’ve got a title and a slug column:

CREATE FUNCTION public.set_slug_from_title() RETURNS trigger
    LANGUAGE plpgsql
    AS $$
BEGIN
  NEW.slug := slugify(NEW.title);
  RETURN NEW;
END
$$;
This function returns a trigger, whilst the slugify function returns text. The NEW keyword above is effectively referencing the table row we are updating.

Note that the above function will happily generate duplicate slugs. You could append an ID, hashid or some other bit of random text to shoot for uniqueness.

And finally, to add this trigger to your table(s)…

CREATE TRIGGER "t_news_insert"
BEFORE INSERT ON "news"
FOR EACH ROW
WHEN (NEW.title IS NOT NULL AND NEW.slug IS NULL)
EXECUTE PROCEDURE set_slug_from_title();

You can now INSERT a new row into news with a title set, and when you commit you’ll see a slug appear that is a URL-safe, sensible representation of the title.

I’ve forked the original gist and put it here.

IP Geolocation in PostgreSQL

Written on 10 May 2019

Still using Go/Ruby/Python/Node to do IP geolocation? Time to think about how PostgreSQL can take the load for you! (N.B. probably only relevant if you use Postgres in your stack, of course…)

Firstly, IP Geolocation is the process of finding the geographical location (i.e. where in the world) an IP address is allocated. Typically if a user is accessing your service from an IP address that geolocates to “Camden, London, UK” you can be reasonably confident that the user is in the Camden area of London.

Secondly, IP Geolocation is not 100% reliable. IP address allocations “move around”, so it’s important to treat the results with a small pinch of salt. If you try to resolve to a more granular level (i.e. city instead of country) you will introduce more uncertainty. Results can also become out of date quickly if you do not keep your source data up to date.

What are we going to do?

To demonstrate IP Geolocation we are going to create some tables to (a) store and update IP address ranges and (b) store geographical reference data.

We can then write a query that takes an IP address as a parameter and gives you the country code it’s allocated to. We will also look at an update process to keep our IP address data up to date.

We are going to focus on Country level resolution, but it is fairly straightforward to take these ideas and apply them to City level data.

Getting Started

We can use PostgreSQL to geolocate IP addresses thanks to the hard work of others:

  • ip4r is a PostgreSQL extension by RhodiumToad that makes querying and indexing of IP addresses very easy. Despite the name, it also supports IPv6

  • GeoLite2 by MaxMind is a free IP geolocation database that supports city and country resolution. This is the de facto IP reference data. At the start of 2019 the original GeoLite data was sunset and is now legacy, so we need to work off the new GeoLite2 structure

  • GeoNames by … GeoNames - a creative commons licensed geographical database (where places are in the world). Since the move to GeoLite2, Maxmind includes a GeoNames identifier which you need to cross reference to work out the city/country.

The first step is to download and install the ip4r Postgres extension. It is distributed as source code, so you need to make and make install the extension; typically the latter will require root privileges.

Activating the ip4r Extension

Because we need to create a new table with field types provided by this new extension, we need to make sure it’s activated in our database. So, with a postgres user role that has permissions to create an extension, select your database and run:

create extension if not exists ip4r;

Assuming this ran without issue, we’re ready to setup our tables.

Table Setup

We need a table to store our geographical reference data (from GeoNames, but Maxmind kindly package it up in their data, too) and then a table to store our IP addresses. We’re going to create 2 tables for our IP data so we can bulk load our raw data and only update rows that are new or have been updated.

create table if not exists ref_geonames_countries (
    geoname_id bigint primary key,
    locale_code varchar,
    continent_code varchar,
    continent_name varchar,
    country_iso_code varchar,
    country_name varchar,
    is_in_european_union boolean  -- boo Brexit (my view only)
);

These column headings lazily reference the CSV provided by Maxmind because it’s easier to bulk load that way. Note this provides names in English - if you want regionalised/local names or names in other languages you’ll need to source the data directly from GeoNames.

-- Data gets bulk loaded here
create table if not exists ref_ip_blocks_tmp (
  iprange iprange,
  geonameid int,
  registered_country_geoname_id int,
  represented_country_geoname_id int,
  is_anonymous_proxy bool,
  is_satellite_provider bool
);

-- Incrementally update this table
create table if not exists ref_ip_blocks (
   iprange        iprange primary key,
   geonameid      integer
);

create index ref_ip_blocks_ip4r_idx on ref_ip_blocks using gist(iprange);

The iprange column is unique by the nature of it being a primary key. This is good for us, as we want to use the ON CONFLICT upsert feature in Postgres to insert new rows or update changed rows when we refresh our reference data.

Getting Data

All the reference data we need comes courtesy of Maxmind. We’re going to download and unzip it:

wget "https://geolite.maxmind.com/download/geoip/database/GeoLite2-Country-CSV.zip"
unzip -o GeoLite2-Country-CSV.zip

We’ll now have the following CSV files:

  • GeoLite2-Country-Locations-en.csv
  • GeoLite2-Country-Blocks-IPv4.csv
  • GeoLite2-Country-Blocks-IPv6.csv
These three files are all we need to load up our reference data and get geolocating.

Loading Data

First up we’ll load the GeoNames reference data. We haven’t bothered with any upserting here. From a psql session in your relevant database:

truncate table ref_geonames_countries;
copy ref_geonames_countries from '/path/to/CSV/Files/GeoLite2-Country-Locations-en.csv' HEADER CSV DELIMITER E',';

If you’re doing this on a system that is continuously online you’ll want to do it in a more transactionally safe manner (i.e. temporary tables, then a drop/rename or truncate/insert). Note that COPY FROM appends data, so we want to clear it down first.

Next up is our IP reference data, which is a tad more complex:

truncate table ref_ip_blocks_tmp;
copy ref_ip_blocks_tmp FROM '/path/to/CSV/Files/GeoLite2-Country-Blocks-IPv4.csv' HEADER CSV DELIMITER E',';
copy ref_ip_blocks_tmp FROM '/path/to/CSV/Files/GeoLite2-Country-Blocks-IPv6.csv' HEADER CSV DELIMITER E',';

-- ref_ip_blocks_tmp now contains a clean, up to date set of IPv4 and IPv6 data

insert into ref_ip_blocks (iprange, geonameid)
select iprange, geonameid::integer
from ref_ip_blocks_tmp
where geonameid != 0
on conflict (iprange)
    do update set geonameid = excluded.geonameid;

This effectively adds and refreshes the country code allocations for IP ranges. To take a look at how many allocations per country you can run this query:

SELECT geonameid, country_name, count(*) AS count
FROM ref_ip_blocks
    JOIN ref_geonames_countries ON geonameid = geoname_id
GROUP BY geonameid, country_name
ORDER BY count DESC;

The top of the table is pretty predictable:

| geonameid | country_name   | count |
|-----------|----------------|-------|
|   6252001 | United States  | 99934 |
|   3017382 | France         | 28527 |
|   2635167 | United Kingdom | 25639 |
|   2921044 | Germany        | 23590 |
|   6251999 | Canada         | 17149 |
|   2750405 | Netherlands    | 16746 |

The tail probably isn’t surprising, but still interesting to look at:

| geonameid | country_name                   | count |
|-----------|--------------------------------|-------|
|   1873107 | North Korea                    |     8 |
|   4031074 | Tokelau                        |     7 |
|   2081918 | Federated States of Micronesia |     7 |
|   4034749 | Wallis and Futuna              |     7 |
|   2078138 | Christmas Island               |     6 |

For the data I’ve used, there are unfortunately around 10,000 ranges that do not have a geonameid attached to them. We can see the country they are registered in by looking at the ref_ip_blocks_tmp table. I’m not familiar enough with IP addressing to understand why this is.

Now, Let’s Geolocate an IP

Now we can run a join query to pull the country name/code of a given IP address. For this I picked a random Facebook server IP via Google:

select iprange, country_name, country_iso_code
from ref_ip_blocks
        join ref_geonames_countries on geonameid = geoname_id
where iprange >>= '<ip_address>'::ip4r;

What on Earth is >>=, you might ask? It’s a CIDR/network operator in core Postgres that means “contains or equals”. In this context we’re asking if the iprange value CONTAINS the IP address we supplied.

And hey presto:

| iprange | country_name  | country_iso_code |
|---------|---------------|------------------|
|         | United States | US               |
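Ruby’s stdlib has the same notion via IPAddr#include?, which is handy for sanity-checking a range outside the database. A small sketch - the CIDR below is a documentation range, not one from the Maxmind data:

```ruby
require 'ipaddr'

# IPAddr#include? is the Ruby analogue of Postgres's `iprange >>= ip`.
range = IPAddr.new('192.0.2.0/24')          # TEST-NET-1 documentation range
range.include?(IPAddr.new('192.0.2.55'))    # => true
range.include?(IPAddr.new('192.0.3.1'))     # => false
```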

I’d advise sticking to the 2 character ISO code when geolocating rows; it’s more compact and can be almost universally joined onto other reference data. There is a 3 character ISO code, but I’ve personally always worked with 2 chars.

A typical use case here will be batch geolocating a table of data with IP addresses in. You can do an UPDATE FROM in Postgres to join the relevant tables:

update my_table_with_ips
set country_iso_code = ref_geonames_countries.country_iso_code
from ref_ip_blocks
    inner join ref_geonames_countries on geonameid = geoname_id
where iprange >>= my_table_with_ips.ip_address_column::ip4r
    and my_table_with_ips.country_iso_code is null;

Staying up to date

If you’re relying on reasonably accurate geolocation, you’d do well to put a scheduled job in place NOW that keeps your reference data up to date. Maxmind update the CSV file on the First Tuesday of every month. If you have historic data to geolocate, you’re probably out of luck as (fairly) Maxmind don’t publicise archives of this free data.

Accurate Cost of Sales for Specialty Coffee Shops

Written on 18 Jan 2019

In the world of specialty coffee shops there are a handful of multi-shop operators; some of them straddle borders, but most of them are local to a country or city. The tail of the specialty coffee shop sector is, then, lots of single owner-operator shops (or two shops, if the first has gone well).

It’s quite a big market and has a large tail which is very fragmented. Having been working in coffee shops and technology (it’s a thing!) for the best part of 6 years, I’ve come to the conclusion that there are few technologies/systems that really represent the nuances of running a specialty coffee shop.

One particular irritation I have is around inventory tracking and cost of sales for coffee shops. This is a very niche irritation, I admit, but undermines my confidence in any inventory system that is “designed for coffee shops”. I’ve seen this in huge enterprise systems and small disrupter/start-up systems. I’ve seen it have an impact at scale, then I’ve seen it ignored at scale because it came out in the wash with each unit’s PnL.

Briefly, and at a high level, an inventory system should show you how much you spend on your ingredients, how many and at what price you sold your finished products for and then ultimately what your profit was.

Extending this to specialty coffee shops, you’d want an inventory system:

  • to show you how much your milk and coffee cost you
  • to let you specify that a flat white takes 220ml of milk and 18g of coffee (ideally with some small % wastage/loss)
  • report that your gross margin on a flat white is 75% (e.g.)
  • report that your overall margin on a flat white on a certain day was 70% (because, you know, free ones and discounts)

Pretty much every inventory system can do this.

Now what happens, say, when you sell a flat white with oat milk?

Typically you’d just put an “Oat” modifier on your point of sale, right?

Remember, your inventory system recognises a “flat white” as being made of whole milk and coffee.

When that sale hits your inventory system, your costs are going to include 220ml whole milk, 18g of coffee and now 220ml oat milk as a modifier. Your overall margin is now being falsely reduced.

Now someone wants a decaf flat white with oat milk. Your margin drops again because the system calculates costs for regular coffee as well as the decaf modifier coffee. You now have positive shrink on whole milk and regular coffee (you have more milk in your fridge than your inventory system thinks).

In a busy owner-operated environment, as many specialty shops are, I understand this is rarely going to be the top of the list of worries or frustrations, but it slowly builds up an underlying distrust of your key financial metrics. Great that you’ve got margin reporting on food, but roll it up with your coffee that probably accounts for 50% of revenue and your ability to make evidence based decisions is severely limited.

So, what’s the solution?

I haven’t seen a perfect solution to this problem; certainly not one that comes off the shelf.

Stock take to make up for it

Being disciplined and doing a stock take on a regular basis will help keep margins under control, but this only works at the business level - your coffees will still be misrepresented.

Create a menu item for every possible combination

A menu item lives in your inventory system and is usually represented by one button on your point of sale (with optional modifiers).

It’s possible, but not something to be encouraged. Just with a flat white, decaf and oat you’d need 4 buttons. Factor in other m!lks, other drinks and you soon have an untameable point of sale.

Manually import negative modifier sales

On a very strange day of trading, let’s say you only sold 100 decaf oat milk flat whites.

Your inventory system thinks you’ve given out:

  • 22 litres of whole milk
  • 1.8kg of regular coffee
  • 1.8kg of decaf coffee
  • 22 litres of oat milk

You can imagine your margin is going to be all over the place. You didn’t give out any whole milk or regular coffee!

If your system supports it, however, you can tell it that you’ve ALSO sold -100 “whole milk measures” and -100 “regular coffee measures”, where these “menu items” have a single milk and coffee ingredient respectively. Given a sympathetic inventory system (some are, some are not), your flat white margin is back to where it should be!
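The correction is simple arithmetic - a Ruby sketch using the quantities from the example above (the recipe figures are the ones from this post; everything else is hypothetical):

```ruby
# 100 decaf oat flat whites rung in as "flat white" + oat + decaf modifiers.
SALES  = 100
RECIPE = { whole_milk_ml: 220, coffee_g: 18 }

# What a naive inventory system books (regular recipe PLUS both modifiers):
naive = {
  whole_milk_ml: SALES * RECIPE[:whole_milk_ml],  # 22 litres never actually poured
  coffee_g:      SALES * RECIPE[:coffee_g],       # 1.8kg never actually ground
  oat_milk_ml:   SALES * 220,
  decaf_g:       SALES * 18
}

# Negative-quantity correcting sales: -100 milk measures, -100 coffee measures.
corrected = naive.merge(
  whole_milk_ml: naive[:whole_milk_ml] - SALES * RECIPE[:whole_milk_ml],
  coffee_g:      naive[:coffee_g]      - SALES * RECIPE[:coffee_g]
)
# corrected now shows zero whole milk and zero regular coffee used,
# while the oat milk and decaf usage stand.
```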

Note that in food service hospitality systems, the concept of negative quantity sales is a little odd. At a dumb level, it suggests you’ve given a food item out then taken it back into your stock. Typically refunds and voids would mean you’ve given out a food item then binned it due to some other issue.

More intelligent inventory systems

The ultimate solution, in my opinion, is a more intelligent inventory system that lets you define the composition of a menu item. All your milk/m!lk options are sat in parallel as options on your flat white, so that they are interchangeable as ingredients on your menu item.

I haven’t seen this done (maybe it does exist?!), but would make cost of sales reporting in specialty coffee shops much happier.

Not restricted to coffee

I can’t see how this is restricted to coffee, either. If I want a Hawaiian pizza with chicken instead of the ham, does the restaurant’s inventory system record usage of both chicken and ham?

As a percentage of the menu item ingredients I expect this pizza example is more diluted than with specialty coffee (where two ingredients form the total cost of the product), but it would still be a problem for restaurant operators at scale.

Got a solution?

I’d love to hear about it!

Copyright © 2006 - 2019 Kester Dobson