The Voxgig Podcast Chatbot: Triggering Ingestion, and some Debugging DX

This is the third post in a series I’m writing about a new Minimal Viable Product we’ve released at Voxgig that turns your podcast into a chatbot. Your visitors can now ask your guests questions directly! The first post is here: Building a Podcast Chatbot for Voxgig, and you can find the full list at the end of this post.

The problem with podcasts is that they tend to have new episodes. This is annoying because you can’t just ingest all the episodes once and be done with it. You have to set up a regular process (once a day is sufficient) to download the podcast RSS feed, check for new episodes and changes to old episodes, and run part of the data ingestion process again.

To model this business logic, the system uses two separate messages:

  • aim:ingest,subscribe:podcast: subscribe to a podcast, called only once to set up the podcast in the system
  • aim:ingest,ingest:podcast: actually process each episode and ingest the content

The ingest:podcast message is the one we will run on a daily basis. For the moment, let’s ignore the devops side of this system (it’s still under development at the time of writing), and focus on processing the episodes of a podcast.

The podcast RSS gives us the list of episodes, so we need to download the RSS to get the latest version. This does mean that on the initial subscription, in our system, the RSS gets downloaded twice. We could easily solve this problem by adding a parameter to the ingest:podcast message (and we may yet do that in future), but for now, we are deliberately not going to solve this problem. There are other, more important parts of the codebase to work on. Downloading the same RSS twice to complete a one-time business process is not exactly a fundamental issue. Let’s invoke “premature optimization is the root of all evil” and leave it at that for the moment.

This little decision lets us keep the ingest:podcast message implementation simpler for now. Let’s look at the code, but this time also think about error handling. This message assumes we have already subscribed to a podcast in our system. That, for the moment, just means we saved an entry in the pdm/podcast table.

Since we have abstracted away the database (see the previous post in this series), we’ll use the term table to refer to a generic idea of an entity store that can persist records that look like JSON documents, and can query these records, at least by top-level properties. That’s pretty much all we need. We use the convention entity-base/entity-name to describe these entity stores and give them some namespacing.

The podcast entry must exist, so we need to know its ID. This must be a parameter to the message action; let’s call it podcast_id. We try to load this podcast and need to respond with an error if it does not exist.

Is this error a system error, one that is in some way fatal to the process, or would lead to corrupted data? Nope. This is an ordinary everyday business logic bug. Somehow a podcast ID got sent to us that isn’t correct. We just reject it, but there’s no need to panic. Thus, we don’t throw an exception, or return a bad HTTP error, or do anything out of the ordinary. We just respond with a JSON message that indicates, by a convention of its schema, that there was a business problem.

The general schema that we use for message responses (when there is one!) is:

  {
    ok: boolean,  // If true, the message was accepted and processed successfully
    why: string,  // If ok is false, optionally supply a free-form debugging code useful to humans
    // Other properties, perhaps well-defined, providing additional response data
  }
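Sketched as a TypeScript type, with a small helper for building failure responses, the convention might look like this (the names MsgResponse and fail are illustrative, not from the codebase):

```typescript
// Hypothetical type names; the actual codebase just uses plain objects.
interface MsgResponse {
  ok: boolean           // true if the message was accepted and processed
  why?: string          // free-form debugging code when ok is false
  [extra: string]: any  // additional, possibly well-defined, response data
}

// Build a business-logic failure response without throwing.
function fail(why: string, extra: Record<string, any> = {}): MsgResponse {
  return { ok: false, why, ...extra }
}
```

The point of the helper is that a business failure is just a return value, never an exception.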

Let’s put all this together in code:

async function ingest_podcast(this: any, msg: any, meta: any) {
  const seneca = this
  const debug = seneca.shared.debug(meta.action)

  const out: any = { ok: false, why: '' }

  const podcast_id = out.podcast_id = msg.podcast_id

  debug && debug('START', podcast_id)

  let podcastEnt = await seneca.entity('pdm/podcast').load$(podcast_id)

  if (null == podcastEnt) {
    out.why = 'podcast-not-found'
    debug && debug('FAIL-PODCAST-ENTITY', podcast_id, out)
    return out
  }

  // Process episodes here...

  return out
}

It’s useful to have a “debug” mode for your microservices that can produce additional log entries for debugging, both locally and when deployed. Some log entries might involve additional work to generate, so you want to avoid that when running normally.

Also, it’s tedious to keep adding context information to each log entry, such as the name of the microservice, the name of the message action function, and so on. Thus we make use of a shared utility that provides a debug function: seneca.shared.debug, and we pass in the current message details (the meta parameter) so it can generate a full log entry.

If debug mode is not enabled, seneca.shared.debug will return null, so we can use that to short circuit any costly log entry code, using the idiom:

debug && debug(...)

We’ll cover shared utilities in a later post, but if you want to look at the code, review ingest-prepare.ts.
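As a rough sketch of the idea behind that utility (the names and details here are assumptions, not the actual implementation in ingest-prepare.ts):

```typescript
// Sketch only: returns a logging function when debug mode is on, else null,
// so callers can write `debug && debug(...)` and skip costly work entirely.
function makeDebug(service: string, action: string, enabled: boolean) {
  if (!enabled) return null
  return (...args: any[]) =>
    console.log(new Date().toISOString(), service, action, ...args)
}
```

A caller might write `const debug = makeDebug('ingest', 'ingest_podcast', process.env.DEBUG === 'true')`, and the `debug && debug(...)` idiom then costs almost nothing when debug mode is off.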

Our code tries to load the podcast, and if it is not found, bails immediately by returning the response:

  {
    ok: false,
    podcast_id: '...',
    why: 'podcast-not-found'
  }

We do a few things here to make debugging easier. When deployed, debugging becomes harder, so the more information you can give yourself, the better.

As well as the expected ok and why properties, we also include the podcast_id parameter in the response. In a system under high load, you might only see the body of the response in the current section of the log that you’re looking at, so give yourself as much context as possible.

For debug mode, we also emit a log entry, and add a text prefix that is easier to identify or search for: "FAIL-PODCAST-ENTITY". Ultimately you’ll want to look at the line of code that caused the problem, but having a unique string is a great help when isolating the flow of the error. It also helps a great deal when working remotely with colleagues. This is a poor substitute for proper error codes, for sure, but it is a classic 80/20 solution that can be improved as needed.

You’ll also notice some defensive pessimistic code here. We assume the message will fail, and initialize ok: false. This also helps us be lazy developers, as we only have to set the why property when returning early due to an error.

Most error conditions in this codebase are handled in the same way, and you can see them in the source code. We’ll omit them in most cases in the rest of this series to keep the example code shorter.

Let’s proceed to the next step: download and save the RSS:

  const batch: string = out.batch = msg.batch

  let feed = podcastEnt.feed

  let rssRes = await seneca.shared.getRSS(debug, feed, podcast_id, mark)

  let rss = rssRes.rss
  let feedname = encodeURIComponent(feed.toLowerCase().replace(/^https?:\/\//, ''))

  await seneca.entity('pdm/rss').save$({
    // `mark` is a short identifier for this run; the id becomes the S3 file name
    id: 'rss01/' + feedname + '/' + mark + '-' + batch + '.rss',
    rss, // store the raw RSS content (simplified)
  })

We introduce a new parameter, batch, which is a string identifying the current podcast ingestion process. This is a classic “batch” job run once a day, so we create a unique identifier to help track this job (and you thought using LLMs and AI models was sexy! No, production coding is the same old boring thing it’s always been since the 1950s – batch jobs!).
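The real batch id generator isn’t shown here; as a hedged sketch, a sortable identifier in the spirit of the B2024… values you’ll see in the REPL output could be built like this (the exact format is an assumption):

```typescript
// Sketch: a lexicographically sortable batch id from the current UTC time,
// e.g. 'B20240223104608748'. The real generator may differ.
function makeBatchId(when: Date = new Date()): string {
  return 'B' + when.toISOString().replace(/[-:TZ.]/g, '')
}
```

Sortable ids like this make it trivial to find all the log entries and stored artifacts for a given daily run.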

The table pdm/rss is special – it is a folder in an S3 bucket where we dump the RSS files. It is not special within our code, however, and looks just like any other database table. We do specify the data entity id directly, as this will become the file name in the S3 bucket.

As we discussed in a previous post, using the same abstraction for different persistence mechanisms makes our code much cleaner and easier to refactor. Changing cloud provider, or even just deciding to use an actual database table in future, won’t require any changes to this code. But more importantly, we can unit test our business logic locally without even setting up fake S3 docker containers or other nonsense.

We store the RSS because we will need it for debugging, and we might want to use it in other ways later. Also, it is good to have a record of the state of a given podcast RSS at a given date and time so we can track changes.

We get the individual episode details from the items property of the RSS (we parse it with the rss-parser package, which handles the various RSS format variants for us). We loop over each episode and emit a process:episode event for each one.

We’re not bothering to check for changes to episodes here. That logic should live in process:episode as that message “understands” episodes.

We also include the episode data in each message. This is a calculated risk. In general, you should keep your messages small, and reference data using an id. But in this case, we can safely assume the RSS content is “reasonable”. If the content is too large, we’ll just let this message fail and then think about whether we even want to solve the problem later.

  let episodes = rss.items

  out.episodes = episodes.length

  // Processing range controls (defaults shown here are assumptions)
  let episodeStart = msg.episodeStart || 0
  let episodeEnd = 0 < msg.episodeEnd ? msg.episodeEnd : episodes.length

  for (let epI = episodeStart; epI < episodeEnd; epI++) {
    let episode = episodes[epI]
    await handleEpisode(episode, epI)
  }

  async function handleEpisode(episode: any, epI: number) {
    // Include the episode data in the message (simplified)
    await seneca.post('aim:ingest,process:episode', {
      podcast_id, batch, episode, epI,
    })
  }

  out.ok = true

  return out

When deployed, the process:episode message is triggered asynchronously, but we still wait for the message queue to accept it (hence the await). Locally it is just a normal message and we wait for the actual business logic to complete.
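One way to picture this dual behavior is as two implementations of the same post contract (an illustrative sketch, not the actual Seneca transport code):

```typescript
type Msg = Record<string, any>

interface Transport {
  post(pattern: string, msg: Msg): Promise<void>
}

// Locally: awaiting post runs the actual business logic inline.
class LocalTransport implements Transport {
  constructor(private handler: (msg: Msg) => Promise<void>) {}
  async post(_pattern: string, msg: Msg) {
    await this.handler(msg)
  }
}

// Deployed: awaiting post only waits for the queue to accept the message;
// the handler runs later, somewhere else.
class QueueTransport implements Transport {
  queue: Msg[] = []
  async post(_pattern: string, msg: Msg) {
    this.queue.push(msg)
  }
}
```

The calling code is identical in both cases, which is exactly why the business logic never has to know where it is running.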

Once we’ve sent all the process:episode messages, we’re done. We set ok: true and return a response.

In the next post in this series, we’ll look at the processing of each episode, including (finally!) our first usage of an LLM, which we’ll use to extract additional information from the episode description.

  1. Building a Podcast Chatbot for Voxgig

  2. The Voxgig Podcast Chatbot is Production Code

  3. The Voxgig Podcast Chatbot: First, Subscribe!

  4. This Post

Posted in chatbot, Node.js, senecajs | Leave a comment

The Voxgig Podcast Chatbot: First, Subscribe!

This is the third post in a series I’m writing about a new Minimal Viable Product we’ve released at Voxgig that turns your podcast into a chatbot. Your visitors can now ask your guests questions directly!

The first post is here: Building a Podcast Chatbot for Voxgig, and you can find the full list at the end of this post.

We want to ingest a podcast. The podcast episodes are described in an RSS feed that also contains the metadata about the podcast. We send a message to the microservices system describing what we want to happen:

  {
    aim:       'ingest',   // The "aim" of the message is the `ingest` service.
    subscribe: 'podcast',  // The instruction to the `ingest` service.
    feed:      ''
  }

The system routes this message to our implementation code (we’ll come back to how that happens later in this series). Since this is a TypeScript code base, our implementation is a TypeScript function:

async function subscribe_podcast(this: any, msg: any, meta: any) {
  let out: any = { ok: false }
  out.why = 'no-code-yet'
  return out
}

As a convention, our microservices accept messages that are JSON documents and also respond with messages that are JSON documents. There may not be a response (if the message is an “event”), but if there is, we use the property ok: boolean to indicate the success or failure of the message, and use the why: string property to provide an error reason to the sender.

Why use this convention? You can’t throw exceptions over the network (easily or naturally). But you also don’t want to use exceptions for business logic failures. The system itself hasn’t failed, just some business logic process.

Our initial implementation does nothing except fail, but we can still test it by using the REPL:

boqr/pdm-local> aim:ingest,subscribe:podcast
{
  ok: false,
  why: 'no-code-yet'
}

Now we can start to write code to fill out the implementation, which will get hot-reloaded as we go, and we can keep using the REPL to test it. This is what they call Developer Experience, folks.

Let’s pause for a minute before writing any more code. We want our system to handle more than one podcast, and we know that we will need to process each podcast episode separately (download the audio, transcribe it, create a vector “embedding” etc.). So in this message action, all we should do is download the RSS, and then send another message to start the actual ingestion process. That way we separate obtaining the podcast RSS feed, from operating on that feed. This will make development and testing easier because once we have the feed, we can run ingestion multiple times without downloading the RSS each time. And trust me, when you build an AI chatbot, you need to rerun your pipeline a lot.

Here is the basic functionality:

import type { RSS } from './ingest-types'

async function subscribe_podcast(this: any, msg: any, meta: any) {
  // The current seneca instance.
  const seneca = this
  let out: any = { ok: false }

  // RSS URL
  let feed = out.feed = '' + msg.feed

  // Processing controls
  let doUpdate = out.doUpdate = !!msg.doUpdate
  let doIngest = out.doIngest = !!msg.doIngest

  // Load the podcast by feed URL to see if we are already subscribed
  let podcastEnt = await seneca.entity('pdm/podcast').load$({ feed })

  // Download the RSS feed if new or updating
  if (null == podcastEnt || doUpdate) {
    let rssRes = await seneca.shared.getRSS(feed)
    let rss = rssRes.rss as RSS

    podcastEnt = podcastEnt || seneca.entity('pdm/podcast').make$()
    podcastEnt = await podcastEnt.data$({
      feed,
      title: rss.title,
      desc: rss.description,
    }).save$()
  }

  if (null != podcastEnt) {
    out.ok = true
    out.podcast = podcastEnt.data$(false)

    if (doIngest) {
      // Emit the ingestion message as an event (simplified)
      await seneca.post('aim:ingest,ingest:podcast', {
        podcast_id: podcastEnt.id,
      })
    }
  }
  else {
    out.why = 'podcast-not-found'
  }

  return out
}

TypeScript types: there are a lot of any types in this code. A future refactoring will improve this situation. For now, remember that network messages are not function calls; the messages are validated in other ways.

Yoda conditions: (null == foo) Safer, this is, young Padawan

I’ve removed most of the debugging, tracing, and control code, but what you see above is the real implementation. Let’s unpack what it does.

This message action expects a message that gives us the feed URL in the feed property. But it also looks for optional doUpdate and doIngest boolean properties. These are used to control how far we progress along the ingestion pipeline.

The doUpdate property must be true to download the RSS feed and “update” the podcast.

The doIngest property must be true to send the ingestion message to start ingesting individual episodes.

You can use these properties in the REPL to switch off parts of the pipeline to concentrate on validating and developing the parts you want to work on.

Note that we also add these properties to the out variable and send them back with the response. This makes debugging easier, especially when looking at logs.

New or Existing Podcast?

The first real work happens when we try to load the podcast from our database using the feed URL. If it already exists, we use the existing database row. The Seneca framework has a fundamental design principle: everything is a message. That includes database access (or is there a database at all? Who knows…).

As a convenience, database messages are wrapped in a traditional Object Relational Mapper interface. Seneca also supports name-spacing data entities. I’ll explain more about Seneca’s persistence system as we go, so you’ll have to take things a little on faith at the start.

Let’s try to load a podcast from the database:

let podcastEnt = await seneca.entity('pdm/podcast').load$({ feed })

This is an asynchronous operation (we have to wait for the database), hence we have to await a promise. The entity method creates a new entity object for us, loading data from whatever table or data source is mapped to the pdm/podcast entity. The Seneca entity ORM provides standard methods that all end in $ to avoid clashing with your own database column names. The load$ method takes a query object and returns the first entity that matches the field values in the query.
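To make these conventions concrete, here is a toy in-memory stand-in that mimics the load$/save$/data$ behavior described above (illustrative only, not Seneca's real implementation):

```typescript
// Toy entity store: persists JSON-like records per "table" name, and
// queries by matching top-level fields, like the real load$ does.
class ToyEntity {
  private static tables = new Map<string, any[]>()

  constructor(private canon: string, private row: any = {}) {}

  // Return the record as a plain JSON object (mimics data$(false)).
  data$(_withMeta?: boolean): any {
    return { ...this.row }
  }

  // Merge in data, assign an id if new, and persist (mimics save$).
  async save$(data?: any): Promise<ToyEntity> {
    Object.assign(this.row, data)
    this.row.id = this.row.id || Math.random().toString(36).slice(2, 8)
    const rows = ToyEntity.tables.get(this.canon) || []
    rows.push(this.row)
    ToyEntity.tables.set(this.canon, rows)
    return this
  }

  // Find the first record whose top-level fields match the query (mimics load$).
  async load$(query: any): Promise<ToyEntity | null> {
    const rows = ToyEntity.tables.get(this.canon) || []
    const q = 'string' === typeof query ? { id: query } : query
    const found = rows.find(r =>
      Object.entries(q).every(([k, v]) => r[k] === v))
    return found ? new ToyEntity(this.canon, found) : null
  }
}
```

This is also roughly why the abstraction is so easy to unit test locally: the business logic only ever sees this small contract, never the database underneath.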

In this system, the pdm namespace is used for tables relating to our podcast chatbot business logic. We will also make use of Seneca plugins that provide standard functionality (such as user accounts) that use the sys namespace. By using our own namespace, we ensure our own tables never conflict with any of the Seneca plugins we might want to use later. I did say this was production code. This is the sort of thing you have to worry about in production applications.

If the podcast does not exist in our system, or if doUpdate is true, then we proceed to download the RSS feed, set some fields on the pdm/podcast entity, and save it to the database.

Cool. So now, if we do have a podcast (null != podcastEnt), then we can proceed to ingest it if doIngest is true. We send a new message but do not expect a reply. We are emitting the message as an event. Locally, the message gets routed inside our single process monolith. When deployed to AWS, our message will get sent out over a Simple Queue Service topic to whoever is interested in it for further processing. Either way, we do have to wait for the message to be posted.

Notice that the mental model of “it’s all just JSON messages all the way down” means we don’t have to (in this code) think about message routing architectures, microservice design patterns, or what service SDK we need to use. We can concentrate on our own business logic.

Other posts in this series

  1. Building a Podcast Chatbot for Voxgig

  2. The Voxgig Podcast Chatbot is Production Code

  3. This Post

Posted in chatbot, Node.js, senecajs | Leave a comment

The Voxgig Podcast Chatbot is Production Code

This post is part of a series.

This is the second post in a series I’m writing about a new Minimal Viable Product we’ve released at Voxgig that turns your podcast into a chatbot. Your visitors can now ask your guests questions directly!

The first post is here: Building a Podcast Chatbot for Voxgig

In this series, I’ll dive into the code that implements the chatbot. It’s all open source, so you can cut and paste it to build your own.

Since it is production-grade code, not just an example for the sake of some “content”, we’ll navigate through the code in baby steps by focusing on specific tasks, rather than trying to go top-to-bottom. Production code has to do a lot of things, so rather than trying to start at the start, we’ll start in a reasonable place so you can see useful code right away.

We have a podcast RSS feed URL, and we want to trigger ingestion of the episodes into the chatbot. Where do we begin?

Well, we have to download and parse the RSS feed. There’s a great RSS parser package that we can use: rss-parser – thank you to Robert Brennan! To get the RSS feed we need one line of code:

await new Parser().parseURL(feed)

That returns the contents of the feed (which is XML) as a JSON document, so it’s nice and easy to work with. We’ll loop through all the episodes, download the audio, get it transcribed with a speech-to-text service (Deepgram in the first version), and then sprinkle some Retrieval Augmented Generation magic pixie dust over everything to make the chatbot work.

But first, let’s do some software architecture. This is not meant to be a toy. The system deploys to AWS, and uses Lambda Functions, DynamoDB, S3, SQS, and other fun AWS stuff.

Also, the system is a microservice system when deployed, but a local monolith when developing locally. How that all works is something we’ll come back to in later posts.

Since we are downloading RSS feeds, we are effectively an RSS reader, and thus we can have the concept of “subscribing” to a feed in our system. That means we need to have a microservice that can accept an instruction to subscribe to a podcast.

The microservice is called ingest and the message that we send to the microservice to subscribe to a podcast (and optionally trigger ingestion) looks like this:

  {
    aim:       'ingest',   // The "aim" of the message is the `ingest` service.
    subscribe: 'podcast',  // The instruction to the `ingest` service.
    feed:      ''
  }

There are more properties, but we’ll come back to those later. Also, I’m a big fan of the Three Virtues: Laziness, Impatience, and Hubris. The string "aim" is three characters and one syllable, whereas "service" is seven characters and two syllables. Also "service" is a term that is going to be horrendously overloaded and over-used in any codebase. Using short non-technical Anglo-Saxon words to stand for project-specific concepts is a great way to reduce overall confusion in a large code base that you will have to maintain for a long time. My favorite programming tool has always been a thesaurus.

I’m completely ignoring, for now, all questions about how this message, which is a JSON document, gets routed to the ingest microservice. But let’s look at the two main ways that we can send this message.

In code, you can send this message (perhaps as a result of a new user filling out a form with their RSS feed URL) by using the Seneca Framework microservices library:

const result = await seneca.post({
  aim:       'ingest',
  subscribe: 'podcast',
  feed:      ''
})

Oh wait, we forgot to be lazy. Let’s try that again:

const result = await seneca.post('aim:ingest,subscribe:podcast', {
  feed: ''
})

Notice that there is no indication of how this message is transported to the ingest service. No HTTP calling code, no message topic, nothing. That’s important. Because the shortest path to the dreaded distributed monolith is not to use a message abstraction layer.

The other way to send this message, which you’ll see quite a bit in this series, is to use a REPL:

$ npm run repl-dev

> @voxgig/podmind-backend@0.3.3 repl-dev
> seneca-repl aws://lambda/pdm01-backend01-dev-monitor?region=us-east-1

Connected to Seneca: {
  version: '3.34.1',
  id: '6ejf/monitor-pdm01-dev@0.3.3',
  when: 1708685110275
}
6ejf/monitor-pdm01-dev@0.3.3> aim:ingest,subscribe:podcast,doIngest:false,feed:''
{
  ok: true,
  why: '',
  batch: 'B2024022310460874',
  mark: 'Ml032jm',
  feed: '',
  doUpdate: false,
  doIngest: false,
  doAudio: false,
  doTranscribe: false,
  episodeStart: 0,
  episodeEnd: -1,
  chunkEnd: -1,
  podcast: Entity {
    'entity$': '-/pdm/podcast',
    feed: '',
    t_mh: 2024020722371877,
    t_m: 1707345438776,
    t_ch: 2024012801013692,
    t_c: 1706403696926,
    id: 'pdfxc5',
    batch: 'B2024020722371839',
    title: 'Fireside with Voxgig',
    desc: '\n' +
      '            This DevRel focused podcast allows entrepreneur, author and coder Richard Rodger introduce you to interesting leaders and experienced professionals in the tech community. Richard and his guests chat not just about their current work or latest trend, but also about their experiences, good and bad, throughout their career. DevRel requires so many different skills and you can come to it from so many routes, that this podcast has featured conference creators, entrepreneurs, open source maintainers, developer advocates and community managers. Join us to learn about just how varied DevRel can be and get ideas to expand your work, impact and community.\n' +
      '        '
  }
}

I literally just REPL’d into my live system and sent that message so I could cut and paste that example for you.

That doIngest parameter lets me turn off ingestion. No point ingesting a podcast I’m already up-to-date with, just for the sake of an example. The message action, implemented in the ingest service, is synchronous, so I get a response back with the details of the podcast stored in my database.

I can trigger almost any behavior in my system, at any point in the ingestion pipeline, using the REPL. I can do this live on AWS, or I can do it locally (with hot-reloading), so the debugging experience is just lovely.

In the next post, we’ll look at the implementation of the aim:ingest,subscribe:podcast message action in detail. That will set us up to understand the other messages that focus more on the RAG implementation.

But you don’t have to wait for me: ingest source code on github.

Other posts in this series

  1. Building a Podcast Chatbot for Voxgig
  2. This post!
  3. The Voxgig Podcast Chatbot: First, Subscribe!
Posted in chatbot, Node.js, senecajs | Leave a comment

Building a Podcast Chatbot for Voxgig

This post is part of a series.

At Voxgig we have now recorded 140 podcast episodes (as of Feb 2024). That’s a lot of chatting about developer relations. It’s great if you are a regular listener (thanks!), or have the time to listen to our back catalogue. But what is frustrating is that there is so much wonderful knowledge about developer relations locked up in an audio format that, while enjoyable to listen to, is not accessible or usable in an efficient manner.

When you need to know something about developer relations, and you want to tap into the wisdom of one of our guests, you are out of luck. Or at least, you were. We decided to turn our coding skills to this little problem and write our own AI chatbot to let you ask our guests your questions directly.

If you go to the Voxgig podcast page, you can now see a little chatbot widget, and when you type in a question, you get a reasonably good answer!

Now this is a very crude initial version, and we are actively improving it. The chatbot uses Retrieval Augmented Generation (RAG) to ingest all our podcast episodes, and let you ask questions of our guests directly.

We decided to open-source the entire project. Building a RAG ingestor and query engine is a natural fit for microservices (there are lots of little asynchronous tasks). So if you’d like your own podcast chatbot, just cut and paste our code:

We’ve had such an enthusiastic reaction to this fun little project we’ve decided to take it further and extend the chatbot to handle all sorts of developer content. We’re also adding the ability to easily experiment with the prompt and the “chunker” (stay tuned if you want to know all about that stuff).

We’re going to “build in public” with this project. You’ll be able to follow along at a source-code level as we make it more and more useful.

To get started, here are the microservices and what each one does:

  • audio – Transcribes audio to dialogue text (using Deepgram).
  • auth – User identity – for the user analytics dashboard.
  • chat – Perform chat queries.
  • chunk – The “chunker” that splits the transcript into chunks for the vector db.
  • embed – Embed the chunks as vectors so you can do similarity searches.
  • entity – General purpose persistent entity operations.
  • ingest – Ingestor to orchestrate the ingestion process.
  • monitor – Monitoring and debugging of the system, provides a REPL.
  • store – Store audio and RSS files.
  • user – User management.
  • widget – The API for the embeddable chat widget (which is a proper web component).

We run this system as a “local monolith”, but deploy it as Lambdas on AWS.

A lot of our development is done in a REPL with hot reloading, so we can alter business logic and experiment on the fly, even with our deployed Lambdas. We’ll get into this in later posts, but here is an example of submitting a question to the live chat service Lambda:

$ seneca-repl aws://lambda/pdm01-backend01-dev-monitor?region=us-east-1

> @voxgig/podmind-backend@0.3.3 repl-dev
> seneca-repl aws://lambda/pdm01-backend01-dev-monitor?region=us-east-1

Connected to Seneca: {
  version: '3.34.1',
  id: 'axjz/monitor-pdm01-dev@0.3.3',
  when: 1708436725968
}
axjz/monitor-pdm01-dev@0.3.3> aim:chat,chat:query,query:'what is devrel?'
{
  ok: true,
  why: '',
  answer: "DevRel, short for Developer Relations, is a field within the tech industry that primarily focuses on building and maintaining relationships between companies and developers. DevRel professionals often act as intermediaries between developers and the companies they serve, bridging the gap between technical know-how and the business needs of a product. They help developers understand how a company's product works and are always on the lookout for opportunities to improve the overall developer experience.",
  context: {
    hits: [ ... ]
  }
}

We’re a TypeScript/JavaScript shop, so that’s what we’re using right now. I would not be surprised if a Python microservice or two appears in the future, but for now, we’ll stick to what we know best. This is the start of a series – I’ll add links to more posts as we go.

Other posts in this series

  1. This post!
  2. The Voxgig Podcast Chatbot is Production Code
  3. The Voxgig Podcast Chatbot: First, Subscribe!
Posted in chatbot, Node.js, senecajs | Leave a comment

Why you should be using a REPL all the way to production

Let’s talk about one of the most productive tools a coder can use: the REPL! The first half of this article gives you a short introduction to the subject if you’re not a coder. If you are a coder, skip ahead to the second half where I talk about using REPLs in production (“with great power comes great responsibility”).

I recently released a long overdue update to the Seneca REPL plugin (A Node.js microservices framework). Since I’m also a fan of fastify (a web framework for Node.js) I couldn’t resist submitting a plugin proposal for a fastify REPL too, since we’ve been having far too much fun with our Seneca REPL, and fastify users are missing out!

Why this renewed focus on a feature I first built years ago? What’s so great about REPLs anyway?

A REPL (Read-Eval-Print Loop) is a coding tool that lets you interact directly with a running system. The browser console is easily the most used REPL, and essential to the development of any complex web application. You can type in code directly, and see results right away. Rather than waiting for your code to compile, deploy or load, you can make changes immediately. You get direct feedback with no delays that lets you stay focused on the work.

Programming systems provide a REPL as an interactive text interface that lets you write code, usually in a more lenient or shorthand form. You don’t have to formally declare variables. You get access to predefined useful state objects (in the browser, you can access the `document` object, among many others). The essential thing is that you are given access to the live running system, whether that is a web browser, or a local development server, or something else.

Coders use a REPL to debug problems directly, to inspect data to figure out what is going on, to experiment with new code to see what works, to get a feel for a new API (Application Programming Interface), to manipulate data, and even to administer deployed systems.

If a new system doesn’t yet have a REPL, coders will always build one for it. Why does this happen? In more ordinary software development, you write some code, save it, and then run the code, often as part of a test suite. If you’re lucky, your test suite is pretty fast, on the order of seconds. More usually, it can take minutes to run for a production system.
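In Node.js, building one is only a few lines with the built-in repl module, which lets you bolt an interactive console onto a running process and expose live state through its context (a minimal sketch; the app shape here is made up):

```typescript
import * as repl from 'node:repl'
import { PassThrough } from 'node:stream'

// Attach a REPL to live application state. In a real server you'd pass
// process.stdin/stdout, or wire the streams to a network socket.
function attachRepl(app: any, input: any, output: any) {
  const server = repl.start({ prompt: 'app> ', input, output })
  server.context.app = app // now `app` is inspectable from the prompt
  return server
}
```

Calling `attachRepl(myAppState, process.stdin, process.stdout)` at startup is often all it takes to get that direct, live feedback loop.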

There’s a huge amount of friction in writing or changing some code, then switching to another place to run the code, and then waiting for results. Programming is a lot like the spinning plates performance act. The coder has to spin up multiple mental states and keep them all in their head, so that they can tie the pieces together to solve the problem. Get interrupted while you’re waiting for your tests to complete? Good luck fixing that bug–now you have to rebuild your mental state again, which can easily take a few minutes of focused thought.

As an aside, this is why “collaborative” open plan offices make your developers super unhappy and unproductive–they need to concentrate, not chit chat!

The coding tool that makes unnecessary wait time go away is the REPL. You don’t have to context-switch half as much. The code is run and you see the results right away, so you can stay focused and get the solution. Often you’d be able to copy and paste the rough code out of the REPL into your editor, and fix it up to be production grade.

Not all code is the same. REPLs work best with “business logic” code, and user interface code. When you’re writing code that’s more about computer science algorithms, or core library code, you spend much more time just thinking rather than smashing the keyboard. But when you have a documented business requirement, the work is more about gluing the parts together than inventing new algorithms. What counts is being able to ensure your data has the right structure, and you are calling the right APIs in the right way. A REPL makes this work much easier.

Where (and when) does the idea of a REPL come from? Probably in 1958, from John McCarthy, the inventor of the Lisp language. Interactive evaluation of expressions was an essential feature of Lisp right from the start. If you had had to code using punch cards up to that point, you’d probably be pretty keen on having a REPL too. 

The Smalltalk programming environment took the idea of a REPL to the next level, by making the entire system “live”. You coded by manipulating the state of the live system using an editor that was basically a REPL. Saving your project was the same as serializing the current state.

Although it didn’t really work out, the database query language SQL was supposed to be a way to “talk” to databases using human-like language. But we programmers still get the benefit of an interactive language to manipulate data.

The REPL environment that most non-coders would be familiar with is a spreadsheet. You edit formulas and you see immediate results. This is very gratifying! Programmers enjoy the same serotonin kick. And it gets the work done faster. Enjoy the cheese!

With machine learning finally starting to deliver, many folks are getting exposed to another form of the REPL: the interactive notebook. First seen in the realm of statistical analytics and mathematics, with systems like Mathematica, notebooks such as Jupyter now provide a comfortable interface to complex underlying systems.

And we are not far off the holy grail of REPLs–the fully competent natural language interface provided by the new generation of artificial intelligence LLMs (Large Language Models). Just playing around with a service like ChatGPT will give you an instant feel for why coders are so keen on having a REPL available.

There’s even a tech startup, marked for unicorn status, that provides software developers with, you guessed it, an online REPL-like environment for coding.

OK, let’s get a bit more technical. If you’re a coder and have used REPLs occasionally but have not found them to be that useful, let’s dive into how you can use them to get some super powers.

First, find a REPL! Here’s a list. The fact that almost every serious programming environment has one should tell you REPLs are a serious tool you need to master.

Let’s talk about what you can do with REPLs and why you would want one. I’ll start with the ordinary stuff and work my way up to the definitely controversial radioactive-spider use cases.

You write some code. You compile it (for most languages). You run your unit tests. Or you run a small script to check things. Or you start up your system and then manually interact with it to check if your code works.

This cycle takes time. You spend a lot of time just waiting for compilation to complete, or the unit testing framework to boot up and then run your tests. Each individual delay might take a few seconds, but it all adds up. Pretty soon you’re on Twitter or YouTube or Discord and you lose twenty minutes before you drag yourself back to work. There are decades’ worth of articles all over the web about the productivity horrors of “context switching” and how it kills your concentration. You work best as a developer when you can get into that magical “flow state” where the code just pours out of your brain.

You know what makes flow state easy to maintain? A REPL! You write, you see results. Right here, right now. If that REPL is exposed by a local version of the system you’re working on, it gets even better. You stay in flow working on your system. Out pours the code!

Now let’s take it to the next level. If you rig up some sort of hot-reloading into your system (there’s always a way to do this in your language of choice), you can debug in flow state without those restart wait times that break your concentration. Instant, instant feedback. That’s the drug we need.
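As a sketch of what such hot-reloading can look like in Node.js with CommonJS modules (`freshRequire` is an invented helper name; a real setup would trigger it from a file watcher such as `fs.watch`):

```javascript
// Hot-reloading in CommonJS boils down to evicting a module from
// require.cache so the next require() re-reads it from disk.
// freshRequire is an invented helper name; wire it up to a file
// watcher (e.g. fs.watch) to reload automatically on save.
function freshRequire(modulePath) {
  const resolved = require.resolve(modulePath);
  delete require.cache[resolved]; // forget the cached copy
  return require(resolved);       // re-load the latest code from disk
}
```

Other runtimes and bundlers have their own equivalents; the principle is always the same: throw away the stale copy and pull in the fresh one without restarting the process.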

You run your code in the REPL. It’s broken. Alt-Tab back to your editor, make a change. Alt-Tab back to the REPL. Hit the up-arrow key, hit return, see results. Still broken? Repeat. Fixed? Alt-Tab, run the unit tests and bask in the glow of green. Go grab a coffee. You’re a 10X dev.

The feeling of intellectual reward you get from not wasting any time waiting for the machine is not something you’ll ever want to give up on again once you’ve tasted it.

And up another level. Take your REPL and start adding custom commands and shortcuts. Your command history will have lots of code that you keep using again and again. Turn that repeated code into shortcuts. Apart from the righteous satisfaction every dev gets from building their own tools, you just made yourself and your team a lot faster and a lot happier.

At this point it’s time to invoke the eternal three virtues of programming:

According to Larry Wall, the original author of the Perl programming language, there are three great virtues of a programmer: Laziness, Impatience and Hubris.

  1. Laziness: The quality that makes you go to great effort to reduce overall energy expenditure. It makes you write labor-saving programs that other people will find useful and document what you wrote so you don’t have to answer so many questions about it.
  2. Impatience: The anger you feel when the computer is being lazy. This makes you write programs that don’t just react to your needs, but actually anticipate them. Or at least pretend to.
  3. Hubris: The quality that makes you write (and maintain) programs that other people won’t want to say bad things about.

[“Programming Perl”, 2nd Edition, O’Reilly & Associates, 1996]

Nothing is more faithful to the Three Virtues than a REPL (or perhaps Perl, or Raku nowadays, I guess).

Your shortcuts can go beyond mere abbreviations. All large systems define conceptual abstractions for the system itself. How do you represent data? Operations on that data? Components? User interactions? API requests? All these things and more can be “reified” in the REPL. Given form and substance, so that you can use them directly.

Here’s an example. Let’s say you use an Object-Relation-Mapper to talk to your database. Maybe your code looks like this:


let alice = new User({ name: "Alice", role: "admin" })


You could type that into the REPL directly. Or you could lift up (“reify”) the “idea” of data entities into a system of shortcuts:


> save user {name:alice,role:admin}


Here’s another example. Let’s say you’re working on a REST API. Sure you’ve got curl and postman to help with testing and debugging, but again, they do require a context-switch. Not much, but it adds up. Instead, use the REPL:


> GET /api/users

[ …list of user objects… ]


Once you start to build up a set of shortcuts, they become a vernacular that makes you and your team very happy developers. And fast.
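A shortcut system like this can start out very simple. Here is an illustrative sketch (all the command names and handlers are invented for the example) of a registry that maps the first word of a REPL line to a handler:

```javascript
// Illustrative sketch of a REPL shortcut registry: the first word of a
// line picks a handler, the rest is passed through. All command names
// and handlers here are invented examples.
const commands = {};

function defineCommand(name, handler) {
  commands[name] = handler;
}

function runLine(line) {
  const [name, ...rest] = line.trim().split(/\s+/);
  const handler = commands[name];
  if (!handler) return 'unknown command: ' + name;
  return handler(rest.join(' '));
}

// Example shortcuts, standing in for a real HTTP client or ORM call:
defineCommand('GET', (p) => 'would fetch ' + p);
defineCommand('save', (args) => 'would save ' + args);
```

The payoff is that each new shortcut costs one `defineCommand` call, so the vernacular grows naturally as the team notices repeated command-history entries.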

Where can you go next? I’m sure you’ll be very familiar with code working perfectly on your local machine, and breaking once deployed to a build server, or to staging or production. This is normal. No amount of testing will save you. Indeed the purpose of build or staging servers is to find exactly these fracture points where different environments break your code.

That’s nice. What isn’t nice is the pain of debugging code that is broken on staging. Welcome to log file hell. We won’t wade into the Big Fight about print-versus-debugger debugging, but when it comes to staging servers, all you often get is a log file.

So you add some print statements and you redeploy. Then you wait for the remote system to churn. Then you load up your log files (probably in a clunky web interface), and try to make sense of the output, most of which is utterly irrelevant. You thought flow state was hard to maintain for local dev work? There ain’t no flow state here. This work is grueling and slow and very not fun.

Wouldn’t you like a REPL into staging? With a REPL you just look at things directly and muck about debugging right on the system, with the staging data. This is the way to debug! And if your REPL also includes some commands for data querying and modification, then you don’t even need to Alt-Tab over to the database interface. Bugs and issues on staging can be resolved in nearly the same way, and nearly as fast as local issues. Happy happy developer!

When you implement a REPL to give you access to staging, you may not always just be able to expose an open network port, which is the normal mode of connection to a REPL. For things like serverless functions, you’ll need to co-opt the cloud infrastructure to let you submit commands and get responses. Use things like AWS Lambda invocation to get your commands up into the cloud. What matters is the developer experience of an interactive terminal. What happens behind the curtain is an implementation detail. 
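One way to keep that implementation detail behind the curtain is to reduce the client’s needs to a tiny transport interface: send a command string, get text back. This is a hypothetical sketch, not any particular tool’s real API:

```javascript
// The REPL client only needs "send a command, get text back". Behind
// this tiny interface you can hide a TCP socket, an HTTP call, or an
// AWS Lambda invocation. Hypothetical sketch, not a real library API.
function makeLocalTransport(evaluate) {
  return {
    async send(cmd) {
      return String(evaluate(cmd));
    },
  };
}

// A Lambda-backed transport would keep the same shape, roughly:
//   { async send(cmd) { return invokeLambdaSomehow({ cmd }); } }

async function replOnce(transport, cmd) {
  return transport.send(cmd); // a real client prints this and re-prompts
}
```

Because every transport exposes the same `send` shape, the interactive terminal experience stays identical whether the system behind it is a local process or a serverless function.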

At this point you will need to start thinking more seriously about access control. Staging servers, build servers, and test servers are shared infrastructure. Who gets REPL access and what can they do? That has to be context dependent, but here’s some things that have worked well for me.

If your system has a concept of user accounts with permissions, reuse that abstraction. Each REPL is operated as a user in the system. You’ll need to add some way to login or provide an access token. If you have an external API that provides this already, you’re nearly home and dry.

If you don’t have a concept of a user, then you’ll need to at least provide a set of access levels, and restrictions for those levels. This is no different from other kinds of infrastructure work that you do to make your systems easy to monitor and maintain.

You will also want to control access at a high level such as network layers and application domains. This is again very much context dependent, but things like kubectl take you most of the way there.

And here’s the big one: TURN OFF THE REPL IN PRODUCTION. You really don’t want an active REPL to go live (well…unless you do–see below!). The safest way to do this is to treat the REPL as being under feature flag control, switched off by default. Add some failsafes by also checking general environment characteristics, like system variables. A REPL falls into the same bucket as system monitoring and control, so you can apply the same DevOps control principles. 
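As an illustration, the gate can be as blunt as this (the flag and variable names are invented; adapt them to your own feature-flag system):

```javascript
// Sketch of the "off by default" rule: the REPL only starts when a
// feature flag is explicitly on AND the environment is not production.
// The flag and variable names are invented for illustration.
function replEnabled(env) {
  const flagOn = env.REPL_ENABLED === 'true';
  const inProduction = env.NODE_ENV === 'production';
  return flagOn && !inProduction;
}

// Usage: if (replEnabled(process.env)) { startRepl(); }
```

Note that both checks have to pass: forgetting to set the flag, or accidentally deploying with NODE_ENV=production, fails safe in both cases.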

And finally, you can run a REPL in production. It’s not as crazy as it sounds. A REPL lets you resolve emergency production issues. It gives you a fine-grained administration interface. It lets you perform maintenance you have not yet automated. These are all massive time savers, especially in the early days of a startup, before you have full systems in place.

Access to the production REPL should be very tightly controlled. And I like to add in additional layers of security, such as time-based one-time password verification. You could even start using SSL certs if you’re really paranoid.

I have found that the operational benefits for power users are worth the risk.

There is one important thing you’ll need to add to your REPL implementation if you go all the way to production. An audit log. You’ll need to keep a log of all the commands, and who ran them. And you’ll need to add some scrubbing logic to remove sensitive data. Your REPL is now a serious systems management tool, so will require more effort to maintain.

I’ve been using REPLs in all these ways for more than a decade. A REPL makes me faster and happier as a developer. It will make you faster and happier too – use a REPL!


@seneca/repl Version 6 Released!

I’ve released a substantial update to the @seneca/repl plugin!

The @seneca/repl plugin provides a REPL for the Seneca microservices framework. As one of the earliest plugins, it has proven to be incredibly useful. A REPL (Read-Eval-Print Loop) offers an interactive space to write code and execute it instantly. If you’ve ever used the browser console or run the command node in Node.js, you’ve used a REPL.

To learn more about the plan for this release, refer to my previous post: @seneca/repl version 2.x plan (Yes, I did say 2.x – that was a brain glitch!). Read that post to understand what the Seneca REPL can do for you.

New Feature: Entity Commands

The Seneca REPL allows you to send messages directly to a running Seneca instance, just by entering the JSON source of the message. In fact, it’s even simpler than that, as the REPL accepts a relaxed form of JSON called Jsonic, which lets you avoid most strict JSON syntax rules.
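To show the flavour of relaxed JSON, here is a toy parser for the simplest case of flat key:value pairs (this is my own illustration; the real Jsonic library handles nesting, quoting, arrays, and much more):

```javascript
// Toy illustration of relaxed JSON: flat "key:value" pairs separated
// by commas, with obvious literals coerced. The real Jsonic library
// does far more; this just shows the flavour of the syntax.
function relaxedParse(src) {
  const out = {};
  for (const pair of src.split(',')) {
    const [key, raw] = pair.split(':').map((s) => s.trim());
    let value = raw;
    if (raw === 'true') value = true;
    else if (raw === 'false') value = false;
    else if (raw !== '' && !isNaN(Number(raw))) value = Number(raw);
    out[key] = value;
  }
  return out;
}
```

So `role:seneca,stats:true` becomes the object `{ role: 'seneca', stats: true }` without any of the quoting ceremony strict JSON demands.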

Here’s an example of using the REPL to get the status of a Seneca instance:

$ seneca-repl
> role:seneca,stats:true
{
  start: '2023-08-01T19:23:23.274Z',
  act: { calls: 105, done: 104, fails: 0, cache: 0 },
  actmap: undefined,
  now: '2023-08-01T19:29:00.108Z',
  uptime: 336834
}

In this interaction, the full JSON message was submitted:

{ "role":"seneca", "stats":true }

This is equivalent to the Seneca API call:

seneca.act({ "role":"seneca", "stats":true })

Also, since Seneca accepts Jsonic too:

seneca.act('role:seneca,stats:true')

When working with Seneca data entities that provide a simple ORM to access your database, you can interact with them using standard Seneca messages. For example:

seneca.entity('foo').list$() // lists all rows in the table "foo"

In the REPL, this would be the equivalent message:

> sys:entity,cmd:list,name:foo

The REPL itself also provides a Seneca instance, allowing you to write standard Seneca code.

However, adhering to Larry Wall’s programming virtues: Laziness, Impatience, and Hubris, I’ve introduced a REPL shortcut for data entities, as they are empirically the most common use case for the REPL.

Each entity operation (list$, load$, save$, remove$) gets its own REPL command, all following the same syntax. Here are some examples:

> list$ sys/user
> list$ sys/user group:foo
> save$ sys/user id:aaa,group:bar
> load$ sys/user aaa

The REPL commands provide various functions to manage data entities, and details and examples can be found in the article.
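To make the shape of these commands concrete, here is a hypothetical sketch of how such a line could be parsed into a Seneca-style entity message (the command-to-message mapping follows the examples above, but the parsing code itself is mine, not the plugin’s; the query part is kept as a raw string):

```javascript
// Hypothetical sketch: turn "list$ sys/user group:foo" into a message.
// The mapping follows the examples in the text; the parsing code is
// illustrative, not the plugin's actual implementation.
function parseEntityCommand(line) {
  const [cmdWord, canon, query] = line.trim().split(/\s+/);
  const msg = {
    sys: 'entity',
    cmd: cmdWord.replace('$', ''), // list$ -> list, save$ -> save, ...
    name: canon,                   // entity canon kept as-is here
  };
  if (query) msg.q = query;        // query left as a raw Jsonic string
  return msg;
}
```
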

New Feature: Auto Reconnection

To enhance developer experience, the REPL client now automatically reconnects to the server if disconnected, using a backoff mechanism. You can also hit the return key to reconnect instantly.
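A backoff mechanism for reconnection is typically just an exponential delay with a cap. A sketch (the numbers are illustrative, not what the plugin actually uses):

```javascript
// One way to implement reconnection backoff: exponential delay,
// doubling per attempt, capped at a maximum. Illustrative numbers only.
function backoffDelay(attempt, baseMs = 100, maxMs = 5000) {
  return Math.min(baseMs * 2 ** attempt, maxMs);
}
```
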

New Feature: REPL Tunnels

Configuring REPL tunnels has been simplified, and it’s now possible to drive the REPL using Seneca messages. We’ve introduced HTTP and AWS Lambda tunnels, and detailed instructions are provided in the article.

WARNING: This feature is a security risk! Don’t expose your production systems without additional access controls.

New Feature: Isolated History

This release improves command history storage, now saved locally in a hidden .seneca folder in your home directory. It keeps a separate history for each unique connection URL, and histories are not truncated.

$ seneca-repl localhost?project=foo # unique command history for the "foo" project
$ seneca-repl localhost?project=bar # unique command history for the "bar" project


With this new, eagerly anticipated release of the @seneca/repl plugin, there are many features to explore and enjoy. As it’s open-source, you can find it on GitHub, where you’re welcome to submit bugs, issues, and feature requests. Enjoy!


@seneca/repl version 2.x plan

I’m updating the @seneca/repl plugin! Here is the basic plan.

NOTE: There’s a @seneca/repl dev Github Project to track this work.

The @seneca/repl plugin provides a REPL for the Seneca microservices framework. It is one of the earliest plugins, and has proven to be one of the most useful. A REPL (Read-Eval-Print Loop) is an interactive space to write code and get it executed right away. If you’ve used the browser console, you’ve used a REPL. With Node.js, you also get a REPL, just run the command node by itself and off you go!

The big thing about a REPL is the speed boost it gives your development process. You just type stuff in and it works right away. You can directly inspect objects to see what they are made of. Debugging is much easier.

The Seneca REPL provides you with the standard Node.js REPL — you can execute arbitrary JavaScript. But it also provides you with access to the root Seneca instance, and with shortcuts to post Seneca messages, and examine the running Seneca system.

For example, if you have a message foo:bar,zed:qaz, then you can post that message directly in the REPL:

 > foo:bar,zed:qaz 

The REPL accepts any valid JSON (or the equivalent Jsonic form of abbreviated JSON) as an input and attempts to post the input as a message, printing the result. Combine this with hot-reloading from the @seneca/reload plugin and you have a lovely little high speed local development environment for your microservice system.

An update for @seneca/repl is long overdue. I’ve created a Github project to track the main tasks. The most important new feature is the ability to interact with the REPL when your microservice is running in a serverless context.

A traditional REPL exposes a network port. This is fine for local development, but it is not supported in a serverless context. However, most serverless environments provide an invocation mechanism so you can send messages to your deployed function. I’m extending @seneca/repl so that it can support this interaction pathway by providing a special message, sys:repl,send:cmd, that can run REPL commands and collect their output.

To implement this some refactoring is required. The old codebase is pretty old. As in, callback-hell old. It also assumes a network socket. So this all has to be pulled apart and abstracted a little. The code is very much streams-based, and that also makes it fun, as the streams have to be marshalled to operate via a request/response interaction.

One issue in the existing code is the lack of a delimiter for the command result. It all sort of works by accident! I’m going to use NUL as the delimiter to mark the end of command response text. This should also clear up some bizarro bugs.
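The framing logic itself is small. Here is an illustrative sketch of the client side (names invented): the server appends a NUL to each response, and the client buffers incoming chunks until a NUL appears:

```javascript
// Sketch of NUL-delimited framing: the server appends '\0' to each
// command response; the client buffers chunks until it sees one, so
// responses split across network packets are reassembled correctly.
function makeFrameReader(onResponse) {
  let buffer = '';
  return function onChunk(chunk) {
    buffer += chunk;
    let idx;
    while ((idx = buffer.indexOf('\0')) >= 0) {
      onResponse(buffer.slice(0, idx)); // one complete response
      buffer = buffer.slice(idx + 1);   // keep any trailing partial data
    }
  };
}
```

With an explicit delimiter, the client no longer has to guess where one command’s output ends and the next begins, which is exactly the class of “works by accident” bug being fixed.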

The other new feature is more direct support for Seneca entities with an extended set of special commands: list$, save$, load$, etc. that mirror the Seneca entity operations. This is a pretty big use case and we’ve been putting up with the kludgy workaround of using entity messages directly for … too many years, sigh.

On the command-line side, the REPL client needs to be extended to perform serverless invocations. A stream overlay will be used for this to preserve streams as the basic abstraction for REPL communication.

The other tasks are housekeeping to move to the new Seneca standard test runner, Jest, and convert the codebase to Typescript.

Once this release is out, the @seneca/gateway plugins will need to be updated to handle security more generally when exposing a REPL. At the moment we tend to use a monitor serverless function that has no external exposure, and can only be called by invocation with appropriate credentials. This monitor function also uses our little trick of embedding all the other microservices as a modular monolith so that you can use the REPL to post any message. While this is mostly sufficient in practice, it would be nice to also be able to invoke any function directly via the REPL.


The Tao of Microservices


My book on the microservice architecture is in early release! This book is based on five years of building microservice systems, of all shapes and sizes. It is a comprehensive guide to using the architecture based on practical experience. I’ve made many, many mistakes along the way – you don’t have to!

The number one reason to use microservices is that they put the fun back into building software systems. Forget about all the serious reasons for the moment. As a matter of personal experience, the microservice architecture is just so much more enjoyable to work with. I’ve been building systems this way since about 2011. Systems big and small. I’ve tried most of the permutations. More often than not, it has all worked out wonderfully well. Each time things get better. A good idea should get better the harder you work it. I’ve worked microservices pretty hard, and I’m still working them, and I’m still coming back for more. The closer you look, the better they get. And they make you feel wonderful.

I’ve decided to write another book, despite swearing that I would never write one again. Writing is the most painful thing you can do to your brain. It forces you to think clearly! A most uncomfortable experience. The wonderful thing about putting your ideas on paper is that it starts a conversation. There is no question that a great deal of work remains to be done to refine the microservice idea. That only happens when we apply our collective intelligence to the problem. This book is a field report from the front lines of the microservice architecture. It captures an approach that has evolved from practical iteration.

At the end of 2011 I co-founded a software consultancy. I had finally had enough of the startup merry-go-round, and decided to try a different business model. As it turns out, it was a hidden passion, and we have had great success and grown quickly. Part of that success is due to our adoption of the microservice architecture (the other part is our participation in the open source community). Microservices let us deliver quickly. Fact. If you want empirical evidence that microservices actually work in practice, this is it. The great thing about software consultancy is that you get to work with so many people and companies, and so many different technical challenges, from greenfield projects to massive legacy integrations. It’s always interesting to hear how Netflix, or Uber, or Amazon are using microservices, and they have very much moved the needle (thank you guys!). But there is still the criticism that these are unicorns, special cases, and that you need the vast resources and teams of rock star coders to make it actually work in practice. The great thing about consultancy is that you can gain such a wide range of experience that you can really see how software development plays out in many different contexts. Microservices can work for any team, right from the word go.

So why are they so much fun? Because they are little building blocks that snap together easily and let you build big systems without melting your brain. So much of the pain of software development comes from the mental effort required to keep lots of spinning plates in the air. Our language platforms have too many features, and too much power. This power is great, and fun to code, when the code-base is small. As soon as it reaches enterprise-grade applications, with massive code-bases, it becomes very not fun. This is called technical debt. It sucks. Technical debt is the reason for lost weekends, death marches, and failed projects. Microservices are the antidote to technical debt. They make it very hard to take out that loan. They are small, so you can’t shoot yourself in the foot.

Microservices work because they give you a scalable component model based on the principle of additivity. Need a new feature? Write a new microservice! The immediate criticism is that with so many moving parts, this is a nightmare to deploy, and impossible to understand. Well certainly, if you think microservices are just a Service-Oriented Architecture, with more and smaller services, then you’d be dead right. That would be a disaster. The key to making it work is to stop obsessing about the services, and turn to the messages between services. The messages can be catalogued. They form a domain language. They can be directly connected to the business requirements. The desired behavior of the messages, and their interactions, can be declaratively defined (pattern-matching is a good approach). Once you have the list of messages, you group them into services. The number of service instances, and the nature of their deployment, and even the transport mechanisms that you use to move messages, are all ultimately infrastructural issues that can be solved with deployment automation and management (you should be doing this anyway, even for monoliths!). They are not fundamental weaknesses, no more than the need to compile a high-level language into machine code is a weakness – it is just a practicality.

The book has two key aims. First, to give you the practical and theoretical tools to design, build and deploy microservice architectures that work, and that give you the benefits of rapid development, flexibility in the face of changing requirements, and continuous delivery. Second, you need to understand the trade-offs of the architecture, so that you can understand the advantages and disadvantages of choosing to use microservices. Your own situation is always relevant. It is your job to make the decision. The problem with all the noise, all the blog posts for and against, is that it is very hard to get a clear understanding of the consequences of your decisions as a software developer. This book is a field report – a concise summary of my five-year journey, and the place where I am now. Compress my five years into 5 hours of reading, and make your own decision about microservices.


Seneca, A Microservices framework for Node.js

The release of Seneca 1.0 represents 5 years of open source evolution, and not a little blood, sweat and tears. The thing I am most happy about is the fact that I did not do the release – Wyatt had that honor, with Dean and Matteo keeping him honest! Seneca is now a community, not just one developer’s itch. And if you ask me about the future, the first priority is the care and feeding of the community. Building the rules of conduct, guiding principles, decision making processes, great documentation, and all the other stuff that isn’t code. We want to be good open source citizens.

Microservices for Node.js

So you want to write microservices in Node.js? Seneca’s job is to make your life easier. The funny thing is, Seneca did not start out as a microservices framework at all. It was a framework for building Minimum Viable Products. To build an MVP, you need to be able to plug together pieces of code quickly. You should be able to list a set of basic functionalities, such as user accounts, database connectors, content delivery, administration backends, etc, and get a web application that “just works”. You then extend and enhance to add your own secret sauce.

I really liked the way that Rails (for Ruby) and Django (for Python) had ecosystems of “business logic” components that you could (almost!) just plug in. But having built systems with both platforms, the reality on the ground was a little different. The promise was that, unlike, say, Java (where I spent far too many years building “enterprise” systems), Rails or Django would let you develop an MVP very quickly. This was only half true. Certainly, if you stuck to the rules, and mostly followed the Model-View-Controller style, you could get pretty far. But the component systems of these platforms always ended up creating technical debt. There were just too many integration hooks, too much opportunity for complexity to creep in. The underlying problem was that neither system had any fundamental structural model to unify the component architecture. It was all special cases, neat tricks, and monkey patching.

Software components

What are software components anyway? They are pieces of functionality that you can glue together. And, by glue, we mean compose. And yes, composability is why people get all hot and bothered about category theory and monads and all that jazz. Back in the real world, the ability to compose software components together is the essence of their value. The thing people love about UNIX command line tools is that you can pipe them together using simple streams of data. It’s a simple component model that works really well. Other component models have the same goal, but don’t quite get there. Take object-oriented programming. Objects are meant to be components. And yet getting objects to work together is … rather awkward. I never fail to be struck by the irony that despite the supposed power of inheritance, interfaces, polymorphism, and such, you still need a book of spells (design patterns) to “code proper”. Seems like all that power just lets you make a bigger mess.

Functional programming is quite a bit better, mostly because functions are more uniform, and thus easier to compose. And many functional programming languages have pattern matching. Now pattern matching is terribly simple, like UNIX pipes, but also terribly powerful, also like UNIX pipes. The pattern of your input determines which function to call. And handling special cases is easy – just add more functions for more specific patterns. The nice thing about this approach is that the general cases can remain general, with simple data structures and logic. Pattern matching is a powerful way to fight technical debt. And it’s useful by itself, so you don’t have to go functional if you don’t want to.

The Genesis of Seneca

By 2010 I had finally become quite allergic to Java. I had tried Ruby and Python, and their primary frameworks, and found them wanting. And then I came across this little science experiment called Node.js. My first reaction, like that of many, was … JavaScript? srsly? But then I remembered that the coder, Douglas Crockford, had written a book about JavaScript. I read that book, JavaScript: The Good Parts, and felt better. And a toy language became how I fed myself and my family.

At the time I was heavily involved in the mobile web and HTML5 worlds, and quite convinced that native mobile apps were on the way out (oh yeah … real soon now). I had even helped build a startup (since acquired by RedHat) based on the idea. But then life took a different course. A new baby, our third, and an offer of a book deal, and a desire to return to freelancing, and the promise of much higher productivity with Node.js, combined to push me into independence once again. I was back writing code, and it was fun!

Just a small hitch. The Node.js module system is fantastic, and there are many, many great modules. Most of them are infrastructural in nature – utilities. Yes, they are software components, and yes, the Node.js module system is also a pretty good software component model. But, it still suffers from the complexity inherent in the underlying JavaScript language. Node modules are sort of composable, and there are good examples, like hapi, or streams, but there was still no easy way to componentize “business logic”. As a freelancer I lived or died by my ability to deliver features. Rewriting the business logic for user account management, or for shopping carts, or payment integrations, or content management, was killing my margins. I decided to build a component model based on pattern matching.

The model is really simple. Components are nothing more than a set of inbound and outbound messages. They are entirely defined by the messages they accept, and the messages they emit. We say nothing about internal data structures, or even causality between messages. A component is fully specified by these two lists of messages.

Let’s say we are writing a little blogging engine. You can post entries. Entries have a title and body text. So you have a post-entry message, and it contains the title and body data. Now you have to answer the questions:

  • Who sends this post-entry message?
  • Who receives it? Does more than one component receive it?
  • What is an “entry” anyway?
  • And what is a message type? What type is post-entry?

And this is just inside the same process. I was not even thinking about distributed systems of microservices at this stage. All I knew was, encoding the messages as method calls on an object was not the way to go – that leads to the same old madness.

The key question is, what is a message type? It’s a hard one to answer. You end up going down the road of schema validation, contracts, and other such nastiness. One way to answer hard questions is not to answer them at all – a common trick among mathematicians. Do parallel lines ever meet? Decide that the question is unanswerable and you get whole new geometries! Pattern matching lets you side-step the question of message types. Here’s how it works:

Let’s say the post-entry message looks something like:

  {
    "title": "Down with this sort of thing!",
    "body": "Careful now!"
  }

What if we just send it to all components? Let each component decide if the message is important. That avoids the question of message routing. Still, it’s hard to recognize the messages you care about, so let’s make it a little easier. Let’s tag the message with a fixed property:

  {
    "post": "entry",
    "title": "Down with this sort of thing!",
    "body": "Careful now!"
  }

Now any components that care about the posting of entries can pattern match messages by looking for a top level post:entry property and value pair. Let’s say we have at least a PostEntry component that handles posted entries – perhaps it saves them to a database.
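As a sketch in plain Node.js (no framework here – the register/send helpers and the PostEntry name are just illustrative), broadcast-plus-matching looks something like this:

```javascript
// Every component sees every message; each component decides,
// by inspecting properties, whether the message concerns it.
const components = [];

function register(component) {
  components.push(component);
}

function send(message) {
  for (const component of components) {
    component(message);
  }
}

const saved = [];

// PostEntry: reacts only to messages tagged with post:entry.
register(function PostEntry(msg) {
  if (msg.post === 'entry') {
    saved.push({ title: msg.title, body: msg.body }); // e.g. save to a database
  }
});

send({ post: 'entry', title: 'Down with this sort of thing!', body: 'Careful now!' });
send({ weather: 'nice' }); // matches nothing; silently ignored

console.log(saved.length); // 1
```

Note that the sender has no idea who, if anyone, is listening.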

There are some nice consequences to pattern matching. Components that emit this message do not need to know about components that consume it. Nor do components that consume the message need to know about the emitting components. That’s decoupling, right there! Messages do not need to be addressed to anybody. The need to have a target for messages is the downfall of many a component architecture. To call a method, you need an object instance, and we avoid that need with patterns. Another consequence: any number of components or component instances can react to the message. That’s not an architectural decision you have to make in advance.

Isn’t this just an event-driven architecture? No. Events are sent and received from topics, and topics are pretty much equivalent to addresses. You have to know the topic name on the sending side.

Isn’t the tag just a backdoor type? On a theoretical level, probably! On a practical level, not really. It doesn’t impose the same constraints that types do, nor provide any ability to validate correctness. Nor does it impose a schema. This approach to component communication is very much in the school of Postel’s Law: “be strict in what you emit, liberal in what you accept”. And the label that we are using for this message, post-entry, is not a type, just an informal name.

Practical Pattern Matching

Messages do have to make it from one component to another eventually. But the mapping from patterns to components does not reside in the components themselves. You can put that information in a separate place, and implement the mapping separately, and in many different ways. I wrote a little pattern-matching engine to do this work: patrun. The way that you “wire” up components is thus independent, declarative, simple to understand, and yet dynamically configurable.

Now let’s kick it up a gear. Let’s add a feature to our system. Blog posts can contain an image! Woohoo! In a traditional software architecture, you’d have to modify your system to support this new feature. You’d have to extend your data models, create sub-classes, update data schemas, change method signatures, update unit tests, and so on, and so forth. No wonder software projects are always late and over-budget.

Stepping back for a minute, the post-entry messages now look like this:

  {
    "post": "entry",
    "title": "Down with this sort of thing!",
    "body": "Careful now!",
    "image": "" // OPTIONAL!
  }

Sometimes the message has an image property, sometimes it doesn’t. Does this break anything? The original PostEntry component that handled post-entry messages still works! It just ignores the extra image property – it means nothing.

Now, add a new component to the system that can handle entries with images: let’s call it PostImageEntry. Any time PostImageEntry sees a message that contains both post:entry, and an image property, then it has a match, and it acts on the message.

There’s an obvious problem. The original PostEntry component is also going to act on the same message, which is not what you want. There’s an easy solution. Add a rule that more specific matches win. PostImageEntry matches more properties than PostEntry, so it wins. The nice thing is, you never had to change the code of the original PostEntry to make this work. All you did was add new code. Not needing to modify old code removes entire classes of potential bugs.
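Here is a toy version of the rule (Seneca’s real matching is done by patrun; the addPattern/act helpers, and the '*' convention for “the property must be present, any value”, are invented here purely for illustration):

```javascript
// A pattern matches a message if every pattern property appears in
// the message ('*' meaning "any value, but the property must exist").
// Among matching patterns, the one with the most properties wins.
const routes = [];

function addPattern(pattern, handler) {
  routes.push({ pattern, handler });
}

function act(msg) {
  const matching = routes.filter(({ pattern }) =>
    Object.keys(pattern).every(
      (k) => msg[k] !== undefined && (pattern[k] === '*' || pattern[k] === msg[k])
    )
  );
  // More specific matches win: most matched properties first.
  matching.sort((a, b) => Object.keys(b.pattern).length - Object.keys(a.pattern).length);
  if (matching.length > 0) matching[0].handler(msg);
}

const log = [];
addPattern({ post: 'entry' }, () => log.push('PostEntry'));
// Added later, without touching PostEntry: wins whenever an image is present.
addPattern({ post: 'entry', image: '*' }, () => log.push('PostImageEntry'));

act({ post: 'entry', title: 'Careful now!' });
act({ post: 'entry', title: 'Down!', image: 'dougal.png' });
console.log(log); // [ 'PostEntry', 'PostImageEntry' ]
```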

The “more specific matches win” rule gives you extensible components. Every time you have a new feature or a special case, match against the property of the message that makes it special. You end up with a set of components where the ones written early in the project are more general, and the ones written later are more specific, and at no point did you ever have to refactor.

It gets better. Older components that are “a bit wrong” and no longer relevant – they’re disposable! Throw them away and rewrite better components. The consequences are local, not global, so rewriting is cheap and safe.

What about composability? Well, let’s say one of your clients is a strict libertarian, and believes all forms of censorship are evil, but another client is deeply traditional and simply won’t tolerate any foul language on their blogging site. Where do you add logic to deal with this?

Try this: write a NicePostEntry component. It checks for foul language, and replaces any objectionable words in the body property with the string “BEEP!”. The NicePostEntry component matches the pattern post:entry, and so captures all post-entry messages. Again we have the problem that this conflicts with our existing PostEntry. The solution is to allow pattern overrides.

We allow NicePostEntry to override the post:entry pattern. But we also give NicePostEntry a reference to the prior component that was attached to that pattern. NicePostEntry can then modify the message as it sees fit, and pass it on to the prior component. This is composition! As an abuse of syntax, we can say, with respect to the pattern post:entry, messages are processed as

PostEntry( NicePostEntry( message ) )
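A minimal sketch of how such an override might work (the add helper and its prior argument are illustrative; in real Seneca a plugin reaches the previously registered handler via this.prior()):

```javascript
// One handler slot per pattern; overriding a pattern hands the new
// handler a reference to the prior one, so it can wrap it.
const handlers = {};

function add(pattern, handler) {
  const prior = handlers[pattern];
  handlers[pattern] = (msg) => handler(msg, prior);
}

const stored = [];

// Original component: saves the entry body.
add('post:entry', (msg) => stored.push(msg.body));

// Override: censor the body, then pass the message on to the prior.
add('post:entry', (msg, prior) => {
  const nice = { ...msg, body: msg.body.replace(/feck/gi, 'BEEP!') };
  prior(nice); // effectively PostEntry( NicePostEntry( message ) )
});

handlers['post:entry']({ post: 'entry', body: 'feck off, cup!' });
console.log(stored[0]); // 'BEEP! off, cup!'
```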

What about post-entry messages with images? Since we have a separate pattern matching engine, we set up our rules to handle both cases:

post:entry, image: undefined -> PostEntry( NicePostEntry( message ) )
post:entry, image: defined -> PostImageEntry( NicePostEntry( message ) )

Business Logic Components

This simple little model gives you pretty much all you need for handling the ever-changing requirements of “business logic”. It works because you don’t need to design a data model in advance, you don’t need to design an object model in advance, and you don’t even need to design message schemas in advance. You start with your best guess of the simpler messages in the system, and you know you have a get-out-of-jail: new features can be handled with new properties, and they won’t break old features.

If you think about it, there is quite a direct path from informal business requirements, to “things that happen” in the system, to messages between components. It’s quite easy to specify the system in terms of messages. In fact, you don’t really need to worry about deciding which components to build up front. You can group messages into natural components as you go, or split them out into separate components if the components get too complex.

And this gets you to the point where you can write very general components that handle all sorts of common application features, in a very generic way, and then enhance and compose as needed for the needs of an individual project. If you look at the plugin page for Seneca, you’ll see there are plugins (software components) for all sorts of things. They all communicate using pattern matching on messages, and so are resilient to versioning issues, allow for alternative implementations, and most importantly, allow the community to build new plugins, for new features, without any “command and control” nonsense. Anyone can write any old Seneca plugin, any old way they like. Of course, there are some conventions, and we do maintain a curated list of well-behaved plugins on the Seneca site. Still, in your own projects, you’re pretty much free to do whatever you like – it’s all just messages at the end of the day.

By late 2011, when Seneca had become a useful component system, I had co-founded nearForm with Cian O’Maidín. We saw the potential in Node.js and decided we wanted to be part of something big. Seneca became a vital part of our ability to deliver quickly and effectively for clients. We’re based in Ireland, so not only are most of our clients remote, most of our developers are also remote. The ability to separate development work into well-defined components, with interfaces specified by message patterns, along with a body of plug-and-play business components, allowed us to excel at delivery, and is one of the cornerstones of our success in software professional services. We did hit one major snag though, and it illustrates an important trade-off and limitation of this approach (and you thought this was all rainbows and unicorns, oh no…).

Data Modeling

The problem was data, specifically, data models. How do you map the classical idea of a data schema onto a system with no types, and arbitrary messages? Our first instinct was to hide all data manipulation inside each component, and treat messages as extracts of relevant data only. This worked, but was not entirely comfortable. You’ll notice a similar problem in microservice architectures. If one microservice “owns” all the data for a given entity, say users, then how do other pieces of business logic in other microservices access and manipulate that data?

At the time we were blissfully unaware of Domain Driven Design, and still rather enamored with the ActiveRecord design pattern. We did have a problem to solve. Components needed a common data model to facilitate interactions, and we also wanted to be database independent (in consulting, especially for large clients, you don’t always get to choose the database).

We decided to model data using a set of standard message patterns corresponding (almost) to the basic Create-Read-Update-Delete data operations. Seneca thus offers a conventional set of message patterns of the form role:entity, cmd:save|load|remove|list that operate on “data entities”. Pattern matching makes it easy to support optional namespaces, so that you can have “system” entities for well established plugins (say for user accounts), and even support things like multi-tenancy. Because all data entity operations reduce to messages, it’s easy to get fancy, and use different databases for different kinds of data, whilst retaining the same API. That’s cool. It lets you do things like switch database mid-project without much pain. Start with MongoDB because in the early days your schema is unstable, and end with Postgres, because the client insists on a relational database for production.
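To make the idea concrete, here is an in-memory sketch of such entity messages (the entityAct helper and its Map-based store are stand-ins invented for illustration; real Seneca routes role:entity messages to a database driver plugin, and wraps them in an ActiveRecord-style entity API):

```javascript
// An in-memory "database" component: it alone owns the storage, and
// everything else only ever sends role:entity messages.
const store = new Map();
let nextId = 0;

function entityAct(msg) {
  if (msg.role === 'entity' && msg.cmd === 'save') {
    const ent = { ...msg.ent, id: msg.ent.id ?? String(++nextId) };
    store.set(msg.name + '/' + ent.id, ent);
    return ent;
  }
  if (msg.role === 'entity' && msg.cmd === 'load') {
    return store.get(msg.name + '/' + msg.id);
  }
  // Swapping MongoDB for Postgres means swapping this component,
  // not the components that send these messages.
}

const alice = entityAct({ role: 'entity', cmd: 'save', name: 'user', ent: { nick: 'alice' } });
const found = entityAct({ role: 'entity', cmd: 'load', name: 'user', id: alice.id });
console.log(found.nick); // 'alice'
```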

You can use pattern composition to add things like data validation and manipulation, access controls, caching, and custom business rules (this is equivalent to adding custom methods to an ActiveRecord). This is all very nice, and works really well in real-world projects. We’re still in business after all! For more details, the Seneca data entity tutorial has you covered.

So what’s the catch? The trade-off is that you have a lowest-common-denominator data model. You get what is essentially a key-value store, but with reasonable, if limited, query capabilities. You certainly don’t get to write SQL, or have any concept of relations. You don’t get table joins. You have to accept denormalization.

Now, on the other hand, one can argue that a simplified data model gives you better scalability and performance, and also forces you to face up to data consistency choices that you should be making. The days of hiding behind “transactions” are gone, especially with the number of users we have to deal with.

The way that Seneca handles data will be expanding. We will certainly retain our simplified model, and use that as the basis for core components. It works, and it works pretty well, but we won’t hide the choices that it entails either. Luckily, the message model allows us, and you, to enhance what’s already there, and push forward. One of our core values is respect for developers that have chosen to use the framework, and that means you’ll never suffer from global thermonuclear version breakage. We’ll keep your old code running. Backwards compatibility is in our blood. You might have to switch on a flag or add a supporting plugin, but we’ll never ask you to refactor.


Microservices

Oh yeah … those. So we invited Fred George to speak at one of our Node.js meetups in Dublin in 2013, about “Programmer Anarchy”, and he pretty much melted our brains. We had discovered microservices, and we loved the idea. Yes, lots of practical problems, like deployment and configuration, and network complexity – not a free lunch by any means. But very tasty, and worth paying for!

We did have a little secret weapon – Seneca. Microservices are really just independently deployable and scalable software components. But how do they communicate? Well, we had already solved that problem! Pattern matching. All we needed to do was figure out the networking piece.

To preserve the simple view of the world that we had created, it was obvious that microservices should not know about each other, in any way. Microservices based on web services offering REST interfaces suffer from the problem of addressing – where does the microservice live? You need to know the network address of the other side.

Now, you can solve this problem in many different ways – service registries, proxies, virtual network overlays, message buses, and combinations thereof. The problem with most approaches is that your microservice code is still closely bound to the transport mechanism. For example, say you decide to use Redis, because you like the publish-subscribe pattern. Well, if you use a Redis library directly, then it’s going to be hard to move to Kafka when you need to scale. Sure, you can write an abstraction layer, but that’s more work again. Alternatively, you could use a system designed exactly for the microservice architecture – Akka, say. That does tend to tie you down to a particular language platform (yes, Seneca is Node.js, but the messages are JSON, so polyglot services are much easier than with a custom protocol).

We decided to adopt the strategy of transport independence. Microservices should neither know nor care how messages arrive or are sent. That is configuration information, and should not require changes to the business logic code. The pattern matching message architecture made it very easy to make this work. Provide a transport plugin that matches the outbound message patterns. The transport plugin sends these messages out onto the network. The transport plugin can also accept messages from the network and submit them to local plugins. From the perspective of all other Seneca plugins, nothing has changed. The transport plugin is just another plugin.
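A sketch of why this works (everything here – makeInstance, the network function – is illustrative; a real transport plugin serializes over HTTP, a message queue, a TCP stream, and so on):

```javascript
// Local dispatch: patterns map to handlers. A transport is just
// another handler, so business code neither knows nor cares.
function makeInstance() {
  const routes = [];
  return {
    add(match, handler) { routes.push({ match, handler }); },
    act(msg) {
      const route = routes.find(({ match }) =>
        Object.entries(match).every(([k, v]) => msg[k] === v)
      );
      return route && route.handler(msg);
    },
  };
}

// "Remote" microservice: owns the post:entry pattern.
const remote = makeInstance();
remote.add({ post: 'entry' }, (msg) => ({ ok: true, title: msg.title }));

// Stand-in for the network: a JSON round-trip. In reality this could
// be HTTP, a message queue, a TCP stream -- the caller can't tell.
const network = (json) => JSON.stringify(remote.act(JSON.parse(json)));

// Local instance: a "transport plugin" claims the same pattern and
// forwards matching messages over the wire.
const local = makeInstance();
local.add({ post: 'entry' }, (msg) => JSON.parse(network(JSON.stringify(msg))));

const reply = local.act({ post: 'entry', title: 'Careful now!' });
console.log(reply.ok); // true
```

From the sender’s perspective, local.act looks identical whether the pattern is handled in-process or across the network.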

We converted Seneca into a microservices framework, with no code changes. We just wrote some new plugins. Of course, later we added API conveniences, and things like message correlation identifiers, but even now Seneca is a microservices platform built entirely from plugins. That means you’re not stuck with our opinion on microservices. You can easily write your own transports.

DANGER: You can’t allow yourself to think that all messages are local and that the network is “hidden” – that’s a classic distributed-computing fallacy. Instead, adopt the mindset that all messages are distributed, and thus subject to failure on the network. In the era of cloud computing, that’s probably going to end up being true for your system anyway.

The plugin approach to message transport gives you a very flexible structure for your microservices. You write your own code in a normal Node.js module, which you can unit test in the normal way. You put your module into a Seneca plugin, and expose its functionality via a set of message patterns (or for simple cases, just write a plugin directly). Then you write a separate execution script, to run your microservice. The execution script “wires” up the microservice to the rest of the network. It handles the configuration of the microservice, including details like network configuration. Just as with Seneca data entities, you can change your microservice communication strategy from HTTP REST to a RabbitMQ message queue, without any changes to your business logic code. Just write a new execution script – it’s only a couple of lines of configuration code.

To see actual code, and try this out for yourself, try the NodeZoo workshop.

Service Discovery

For the transport plugins, we started as simply as we could. In fact, the basic transport is pretty much just point-to-point HTTP REST, and you do need to provide an address – the IP and port of the remote service. But this is OK – your business logic never needs to know, and can be written under the useful fiction that it can send and receive any message, and it will still “just work”.

This approach has another useful feature – testing and mocking is easy. Simply provide stub implementations of the message patterns that your microservice expects. No need for the laborious re-construction of the object hierarchies of third party libraries. Testing reduces to verifying that inbound and outbound messages have the expected behavior and content. Much simpler than testing the nooks and crannies of all the weird and wonderful APIs you can construct just with normal language features in JavaScript.

Despite these advantages, service discovery had remained an awkward practicality, until recently that is. The problem is that you still have to get the network location of the other services, or at least the port numbers if using a proxy, or the location of the message bus, or use a central service registry, or set up fancy virtual DNS, or find some other way to get location information to the right place. We used all of these strategies, and more, to mitigate the problems that this issue causes in production, and also for local development.

But the pressure was mounting from our user community. Everybody wants a free lunch in the end. So we started to experiment with mesh networking. Microservices should configure each other, and share information directly with each other, in a decentralized way. The problem with most of the current service discovery mechanisms is that they use a central point of control to manage the microservice system. Consider the drawbacks. The central registry can easily get out of date. Services come and go as they fail and restart, or as the system scales up or down. The registry may not know about all of the healthy services, and may direct clients to use unhealthy services. Detecting unhealthy services has to be done by heart-beating, but that is vulnerable to slow failures, where the service, under load, may just be taking longer to respond. All-in-all, centralized microservice configuration is tricky, and offers a valid criticism of the entire approach in production.

Nonetheless, we were determined to find a solution. The advantages of microservices far outweigh even this problem. They really do make continuous deployment very simple, and provide you with meaningful ways to measure the health of your system. There had to be a way to let microservices discover each other dynamically, and without a central point of failure.

It was with some interest that we noticed what Uber was doing with the SWIM algorithm. It’s powerful stuff. Essentially it lets a microservice join a network of other microservices, and then they all share information. Not by broadcasting, which has scaling issues, but by infection. Information (such as the start-up of a new microservice) moves through the network like an infection, with neighbors infecting each other. Get the mechanics right, throw in a little randomness, and you get fantastic performance and scalability. You also know very quickly if a microservice is unhealthy. It’s pretty sweet!

We wanted to use it, but there was another microservice function we had to build first – client-side load-balancing. You put a little load-balancer inside your client microservice, rather than using an external one (such as nginx or HAProxy). Netflix’s ribbon is a great example. The philosophy of Seneca is that all configurations have their place, and we wanted to offer client-side load-balancing as a possibility.

The trick is to make the balancer dynamically reconfigurable. The balancer is a transport, so it routes any messages that match remote patterns to remote services. Now we also use Seneca’s strengths to make this independent of the transport. You compose the balancer together with the underlying transport, and you can balance over any remote mechanism – HTTP end points, message buses, TCP streams, etc. Any combination of message pattern and transport is possible (you can see why fans of functional programming get excited by composition).
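A minimal sketch of the balancing half of this idea (makeBalancer and the target functions are illustrative; a real target would wrap an HTTP endpoint, a queue publisher, or a TCP stream):

```javascript
// A round-robin balancer composed over any set of "transports":
// each target is just a function that sends a message somewhere.
function makeBalancer(targets) {
  let next = 0;
  return (msg) => {
    const target = targets[next];
    next = (next + 1) % targets.length; // rotate: actor-style round-robin
    return target(msg);
  };
}

// Real targets would wrap HTTP endpoints, queue publishers, etc.
const hits = { a: 0, b: 0 };
const balance = makeBalancer([
  (msg) => { hits.a++; return 'a:' + msg.post; },
  (msg) => { hits.b++; return 'b:' + msg.post; },
]);

balance({ post: 'entry' });
balance({ post: 'entry' });
balance({ post: 'entry' });
console.log(hits); // { a: 2, b: 1 }
```

Publish mode is the same shape, except the balancer calls every target instead of rotating through them.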

The next step is to provide a mesh networking plugin. All that plugin does is join the mesh of microservices using the SWIM algorithm (Thanks Rui Hu!). It then announces to the world the message patterns that the current microservice wants to listen for. The pattern information is disseminated throughout the network, and the client-side load-balancers dynamically add the new microservice to their balance tables. The balancer is able to support both actor (where listening services round-robin messages) and publish (where all listening services get each message) modes of operation. This provides you with a complete microservice network configuration. Except there is no configuration!

At the moment, our implementation still depends on “well-known” entry points. You have to run a few base nodes at predetermined locations, so that microservices know where to look to join the network – Peter is fixing that one for us, and soon the network will be completely self-managing.

With mesh networking, Seneca has now made microservice service discovery pretty much a solved problem. Even if we do say so ourselves!

Welcome to the Community!

It has been an honor, and privilege, to start and then participate in an active and growing open source project. It is really very special when people put so much trust in your code that they use it in production. It’s easy to forget how significant that is. And I am incredibly grateful to everybody that has contributed to Seneca over the years – Thank You!

We want to be a great project to contribute to, a safe project for any developer, and a friendly community. We’re lucky that our plugin architecture gives us a simple mechanism for contributions, and also allows contributors to do things their own way. We will curate the main Seneca organization to have a consistent and well-tested set of plugins, and of course some rules will be needed to do that. That said, we want to live by principles, not regulations.

The microservices architecture is very young, and is fertile territory for research and experimentation. This is our contribution.


Monolithic Node.js

Are large-scale Node.js systems possible? Empirically, the answer is yes. Walmart and PayPal have both shown that it can be done. The quick criticism is that you need 10X engineers. This is a classic, and well-founded, criticism. New ways of doing things are often found to be exceptionally productive, precisely because brilliant minds self-select for the new and interesting.

So let’s rephrase the question. Are large-scale Node.js systems possible with mainstream developers? If you believe that these large-scale Node.js systems will resemble the large-scale Java and .Net systems you have known and loved, then the answer is, emphatically, no. JavaScript is far too weak a language to support the complexity inherent in systems of such scale. It’s not exactly type-safe, and half the language is unusable. There’s a reason the best-selling book on the language is called JavaScript, The Good Parts.

Despite this, we’ve managed to build quite a few large-scale systems at my company, nearForm. Here’s what we’ve learned, and how we do it.

The defining attribute of most large-scale, mainstream traditional systems is that they are monolithic. That is, a single large codebase, with many files, thousands of classes, and innumerable configuration files (in XML, if you’re lucky). Trying to build a system like this in JavaScript is indeed the path to madness. The visceral rejection of Node.js that you see from some quarters is often the spidey-sense of an experienced enterprise developer zapping them between the eyes. JavaScript? No! This reaction is entirely justified. Java and .Net have been designed to survive enterprise development. They enable monolithic architecture.

There are of course systems built in Java and .Net that are not monolithic, that are more structured. I’ve built in both styles myself. But it takes effort, and even the best systems fall to technical debt over time. It’s too easy to fall back into the monolithic trap.

Monolithic Systems are Bad

What is so bad about monolithic systems anyway? What does it mean for a system to be “monolithic”? The simplest definition is a system that cannot survive the loss of any of its parts. You pull one part out, and the whole thing fails. Each part is connected to the others, and interdependent on them.

The term monolith means single stone, and is derived from the Ancient Greek. The ancient city of Petra in modern-day Jordan is one of the best examples of monolithic architecture. Its buildings are constructed in one piece, hewn directly from the cliff face of a rocky mountain. It also provides a perfect example of the failure mode of monolithic systems. In 363 AD an earthquake damaged many of the buildings, and the complex system of aqueducts. As these were carved directly into the mountain, they were impossible to repair, and the city fell into terminal decline.

So it goes with monolithic software. Technical debt, the complexity built into the system over time, makes the system impossible to repair or extend at reasonable cost as the environment changes. You end up with things like month-long code freezes in December so that the crucial Christmas shopping season is not affected by unknowable side-effects.

The other effect of monolithic software is more pernicious. It generates software development processes and methodologies. Because the system has so many dependencies, you must be very careful how you let developers change it. A lot of effort must be expended to prevent breakage. Our current approaches, from waterfall to agile, serve simply to enable monolithic systems. They enable us to build bigger and add more complexity. Even unit testing is an enabler. You thought unit testing was the good guy? It’s not. If you do it properly, it just lets you build bigger, not smarter.

Modular Systems are Good

So what are we supposed to do, as software developers, to avoid building monolithic systems? There are no prizes for knowing the answer. Build modular systems! The definition of a modular system is simply the inverse: each part stands alone, and the system is still useful when parts are missing.

Modular software should therefore be composed of components, each, by definition, reusable in many contexts. The idea of reusable software components is one of the Holy Grails of software development.

The greatest modular system humanity has created to date is the intermodal shipping container. This is a steel box that comes in a standard set of sizes, most commonly 8′ wide, 8’6” tall, and 20 or 40 feet long. This standardisation enables huge efficiency in the transport of goods. Each container is re-usable and has a standardised “API”, so to speak.

Sadly, software components are nothing like this. Each must necessarily have its own API. There are dependency hierarchies. There are versioning issues. Nonetheless, we persist in trying to build modular systems, because we know it is the only real way to deal with complexity.

There have been some success stories, mostly at the infrastructure level. UNIX pipes, and the UNIX philosophy of small programs that communicate over pipes, works rather well in practice. But it only takes you so far.

Other attempts, such as CORBA, or Microsoft’s OLE, have suffered under their own weight. We’ve grown rather fond of JSON-based REST services in recent years. Anyone who’s been at the mercy of third party APIs, and I’m looking at you, Facebook, will know that this is no promised-land either.

Objects are Hell

The one big idea for modular software that seems to have stuck to the wall, is the object-oriented paradigm.

Objects are supposed to be reusable representations of the world, both abstract and real. The tools of object-oriented development – interfaces, inheritance, polymorphism, dynamic methods, and so on – are supposed to provide us with the power to represent anything we need to build. These tools are supposed to enable us to build objects in a modular, reusable way.

The fundamental idea of objects is really quite broken when you think about it. The object approach is to break the world into discrete entities with well-defined properties. This assumes that the world will agree to being broken up in such a way. Anyone who has tried to create a well-designed inheritance hierarchy will be familiar with how this falls apart.

Let’s say we have a Ball class, representing, well, a ball. We then define a BouncingBall, and a RollingBall, both inheriting from the base Ball class, each having suitable extensions of properties and methods. What happens when we need a ball that can both bounce and roll?

Admittedly, inheritance is an easy target for criticism, and the solution to this problem is well-understood. Behaviours (bouncing and rolling) are not essential things, and should be composed instead. That this is known does not prevent a great deal of inheritance making its way into production systems. So the problem remains.
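To make the well-understood solution concrete, here is a small sketch of composing behaviours instead of inheriting them (the names are illustrative):

```javascript
// Inheritance paints you into a corner: a BouncingRollingBall cannot
// extend both BouncingBall and RollingBall. Composing behaviours as
// plain functions sidesteps the hierarchy entirely.
const bouncing = (ball) => ({ ...ball, bounce: () => ball.name + ' bounces' });
const rolling = (ball) => ({ ...ball, roll: () => ball.name + ' rolls' });

const ball = rolling(bouncing({ name: 'beach ball' }));
console.log(ball.bounce()); // 'beach ball bounces'
console.log(ball.roll());   // 'beach ball rolls'
```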

Objects are really much worse than you think. They are derived from a naïve mathematical view of the world. The idea that there are sets of pure, platonic things, all of which share the same properties and characteristics. On the surface this seems reasonable. Scratch the surface and you find that this idea breaks down in the face of the real world. The real world is messy. It even breaks down in the mathematical world. Does the set of all sets that do not contain themselves contain itself? You tell me.

The ultimate weakness of objects is that they are simply enablers for more monolithic development. Think about it. Objects are grab bags of everything a system needs. You have properties, private and public. You have methods, perhaps overridden above or below. You have state. There are countless systems suffering from the Big Ball of Mud anti-pattern, where a few enormous classes contain most of the tangled logic. There are just too many different kinds of thing that can go into an object.

But objects have one last line of defense. Design patterns! Let’s take a closer look at what design patterns can do for us.

Bad Patterns are Beautiful

In 1783 Count Hans Axel von Fersen commissioned a pocket watch for the then Queen of France, Marie Antoinette. The count was known to have had a rather close relationship with the Queen, and the extravagance of the pocket watch suggests it was very close indeed. The watch was to contain every possible chronometric feature of the day: a stopwatch, an audible chime, a power meter, and a thermometer, among others. The master watchmaker Abraham-Louis Breguet was tasked with the project. Neither Marie Antoinette, Count Fersen, nor Breguet himself lived to see the watch completed. It was finally finished in 1837, by Breguet’s son. It is one of the most beautiful projects to have been delivered late and over-budget.

It is not for nothing that additional features beyond basic timekeeping are known as complications in the jargon of the watchmaker. The watches themselves possess a strange property. The more complex they are, the more intricate, the more monolithic, the more beautiful they are considered. But they are not baroque without purpose. Form must follow function. The complexity is necessary, given their mechanical nature.

We accept this because the watches are singular pieces of artistry. You would find yourself under the guillotine along with Marie Antoinette in short order if you tried to justify contemporary software projects as singular pieces of artistry. And yet, as software developers, we revel in the intricacies we can build. The depth of patterns that we can apply. The architectures we can compose.

The complexity of the Marie Antoinette is seductive. It is self-justifying. Our overly complex software is seductive in the same way. What journeyman programmer has not launched headlong into a grand architecture, obsessed by the aesthetic of their newly imagined design? The intricacy is compounded by the failings of their chosen language and platform.

If you have built systems using one of the major object-oriented languages, you will have experienced this. To build a system of any significant size, you must roll out your Gang-of-Four design patterns. We are all so grateful for this bible that we have forgotten to ask a basic question. Why are design patterns needed at all? Why do you need to know 100+ patterns to use objects safely? This is a code smell!

Just because the patterns work, does not mean they are good. We are treating the symptoms, not the disease. There is truth in the old joke that all programming languages end up being incomplete, buggy versions of LISP. That’s pretty much what design patterns are doing for you. This is not an endorsement of functional programming either, or any language structure. They all have similar failings. I’m just having a go at the object-oriented languages because it’s easy!

Just because you can use design patterns in the right way does not mean using design patterns is the right thing to do. There is something fundamentally wrong with languages that need design patterns, and I think I know what it is.

But before we get into that, let’s take a look at a few things that have the right smell. Let’s take a look at the Node.js module system.

Node.js Modules are Sweet

If you’ve built systems in Java or .Net, you’ll have run into the dreaded problem of dependency hell. You’re trying to use component A, which depends on version 1 of component C. But you’re also trying to use component B, which depends on version 2 of component C. You end up stuck in a catch-22, and all of the solutions are a kludge. Other platforms, like Ruby or Google’s new Go language may make it easier to find and install components, but they don’t solve this problem either.

As an accident of history, JavaScript has no built-in module system (at least, not yet). This weakness has turned out to be a hidden strength. Not only has it created the opportunity to experiment with different approaches to defining modules, but it also means that all JavaScript module systems must survive within the constraints of the language. Modules end up being local variables that do not pollute the global namespace. This means that module A can load version 1 of module C, and module B can load version 2 of module C, and everything still works.

The Node Package Manager, npm, provides the infrastructure necessary to use modules in practice. As a result, Node.js projects suffer very little dependency hell. Further, it means that Node.js modules can afford to be small, and have many dependencies. You end up with a large number of small focused modules, rather than a limited set of popular modules. In other platforms, this limited set of popular modules ends up being monolithic because each module needs to be self-sufficient and do as much as possible. Having dependencies would be death.

Modules also naturally tend to have a much lower public API to code ratio. They are far more encapsulated than objects. You can’t as a rule misuse them in the same way objects can be misused. The only real way to extend modules is to compose them into new modules, and that’s a good thing.

The Node.js module system, as implemented by npm, is the closest anybody has come in a long time to a safe mechanism for software re-use. At least half the value of the entire Node.js platform lies in npm. You can see this from the exponential growth rate of the number of modules, and the volume of downloads.

Node.js Patterns are Simple

If you count the number of spirals in the seed pattern at the centre of a sunflower, you’ll always end up with a Fibonacci number. This is a famous mathematical number sequence, where the next Fibonacci number is equal to the sum of the previous two. You start with 0 and 1, and it continues 1, 2, 3, 5, 8, 13, 21, 34, and so on. The sequence grows quickly, and calculating later Fibonacci numbers is CPU intensive due to their size.

There’s a famous blog post attacking Node.js for being a terrible idea. An example is given of a recursive algorithm to calculate Fibonacci numbers. As this is particularly CPU intensive, and as Node.js has only one thread, the performance of this Fibonacci service is terrible. Many rebuttals and proposed solutions later, it is still the case that Node.js is single-threaded, and CPU intensive work will still kill your server.

If you come from a language that supports threads, this seems like a killer blow. How can you possibly build real systems? There are two things that you do. You delegate concurrency to the operating system, using processes instead of threads. And you avoid CPU intensive tasks in code that needs to respond quickly. Put that work on a queue and handle it asynchronously. This turns out to be more than sufficient in practice.

Threads are notoriously difficult things to get right. Node.js wins by avoiding them altogether. Your code becomes much easier to reason about.

This is the rule for many things in Node, when compared to object-oriented languages. There is simply no need for a great many of the patterns and architectures. There was a discussion recently on the Node.js mailing list about how to implement the singleton pattern in JavaScript. While you can do this in JavaScript using prototypical inheritance, there’s really very little need in practice, because modules tend to look after these things internally. In fact, the best way to achieve the same thing using Node.js is to implement a standalone service that other parts of your system communicate with over the network.

Node.js does require you to learn some new patterns, but they are few in number, and have broad application. The most iconic is the callback pattern, where you provide a function that will be called when the system has more data for you to work with. The signature of this function is always: an error object first, if there was an error. Otherwise the first argument is null. The second argument is always the result data.
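A sketch of the convention with a made-up function (divide is illustrative, not a Node API):

```javascript
// Error-first callback: pass an Error as the first argument on
// failure, or null plus the result data on success.
function divide(a, b, callback) {
  if (b === 0) {
    return callback(new Error('division by zero'));
  }
  callback(null, a / b);
}

divide(10, 2, (err, result) => {
  if (err) return console.error(err.message);
  console.log(result); // prints 5
});

divide(1, 0, (err, result) => {
  if (err) return console.error(err.message); // prints "division by zero"
  console.log(result);
});
```

Because every asynchronous API shares this shape, the error-handling branch is always in the same place.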

The callback function arises naturally from the event handling loop that Node.js uses to dispatch data as it comes in and out of the system. JavaScript, a language designed for handling user interface events in the browser, turns out as a result to be well-suited to handling data events on the server side.

The first thing you end up doing with Node.js when you start to use it is to create callback spaghetti. You end up with massively indented code, with callback within callback. After some practice you quickly learn to structure your code using well-named functions, chaining, and libraries like the async module. In practice, callbacks, while they take some getting used to, do not cause any real problems.

What you do get is a common interface structure for almost all module APIs. This is in stark contrast to the number of different ways you can interact with object-oriented APIs. The learning surface is greatly reduced.

The other great pattern in Node.js is streams. These are baked into the core APIs, and they let you manipulate and transform data easily and succinctly. Data flows are such a common requirement that you will find the stream concept used all over the place. As with callbacks, the basic structure is very simple. You pipe data from one stream to another. You compose data manipulations by building up sets of streams connected by pipes. You can even have duplex streams that can read and write data in both directions. This abstraction leads to very clean code.

Because JavaScript is a semi-functional language, and because it does not provide all the trappings of traditional object-oriented code, you end up with a much smaller set of core patterns. Once you learn them, you can read and understand most of the code you see. It is not uncommon in Node.js projects to review the code of third party modules to gain a greater understanding of how they work. The effort you need to expend to do this is substantially less than for other platforms.

Thinking at the Right Level

Our programming languages should let us think at the right level, the level of the problems we are trying to solve. Most languages fail miserably at this. To use an analogy, we’d like to think in terms of beer, but we end up thinking in terms of the grains that were used to brew the beer.

Our abstractions are at too low a level, or end up being inappropriate. Our languages do not enable us to easily compose their low level elements into things at the right level. The complexity in doing so trips us up, and we end up with broken, leaky abstractions.

Most of the time, when we build things with software, we are trying to model use cases. We are trying to model things that happen in the world. The underlying entities are less important. There is an important data point in the observation that beginning programmers write naïve procedural code, and only later learn to create appropriate data structures. This is telling us something about the human mind. We are able to get things done by using our intelligence to accommodate differences in the entities that make up our world.

A bean-bag chair is still a chair. Every human knows how to sit in one. It has no back, and no legs, but you can still perform the use-case: sitting. If you’ve modeled a chair as an object with well-defined properties, such as assuming it has legs, you fail in cases like these.

We know that the code to send an email should not be tightly coupled to the API of the particular email sending service we are using. And yet if you create an abstract email sending API layer, it inevitably breaks when you change the implementation because you can’t anticipate all the variants needed. It’s much better to be able to say, “send this email, here’s everything I’ve got, you figure it out!”

To build large-scale systems you need to represent this action-oriented way of looking at the world. This is why design patterns fail. They are all about static representations of perfect ontologies. The world does not work like that. Our brains do not work like that.

How does this play out in practice? What are the new “design patterns”? In our client projects, we use two main tools: micro-services, and pattern matching.

Micro-Services Scale

We can use biological cells as an inspiration for building robust scalable systems. Biological cells have a number of interesting properties. They are small and single-purpose. There are many of them. They communicate using messages. Death is expected and natural.

Let’s apply this to our software systems. Instead of building a monolithic 100,000-line codebase, build 100 small services, each 100 lines long. Fred George (the inventor of programmer anarchy), one of the biggest proponents of this approach, calls these small programs micro-services.

The micro-services approach is a radically different way of building systems. The services each perform a very limited task. This has the nice effect that they are easy to verify. You can eye-ball them. Testing is much less important.

On the human side, it also means that the code is easy to rewrite, even in a different language. If a junior engineer writes a bad implementation, or you simply don’t understand the code, you can throw it away and rewrite it. Micro-services can be written in pretty much any language, and they are easy to replace.

Micro-services communicate with each other by sending messages. You can send these messages directly over internal HTTP, or use a message queue for more scale. In fact, the transport mechanism does not matter all that much. From the perspective of the service, it just deals with whatever messages come its way. When you’re building services in Node.js, JSON is the most natural formatting choice. It works well for other languages too.

They are easy to scale. They offer a much finer-grained level of scaling than simply adding more servers running a single system. You just scale the parts you need. We’ve not found the management of all these processes to be too onerous either. In general you can use monitoring utilities to ensure that the correct number of services stay running.

Death becomes relatively trivial. You’re going to have more than one instance of important services running, and restarts are quick. If something strange happens, just die and restart. In fact, you can make your system incredibly robust if you build preprogrammed death into the services, so that they die and restart randomly over time. This prevents the build-up of all sorts of corruption. Micro-services let you behave very badly. Deployments to live systems are easy. Just start replacing a few services to see what happens. Rolling back is trivial – relaunch the old versions.

Micro-services also let you scale humans, both at the individual and team level. Individual brains find micro-services much easier to work with, because the scope of consideration is so small, and there are few side-effects. You can concentrate on the use-case in hand.

Teams also scale. It’s much easier to break up the work into services, and know that there will be few dependencies and blockages between team members. This is really quite liberating when you experience it. No process required. It flows naturally out of the architecture.

Finally, micro-services let you map your use-cases to independent units of software. They allow you to think in terms of what should happen. This lets you get beyond the conceptual changes that objects impose.

Pattern Matching Rules

Micro-services can bring you a long way, but we’ve found that you need a way to compose them so that they can be reused and customised. We use pattern matching to do this.

This is once more about trying to think at the right level. The messages that flow between services need to find their way to the right service, in the right form, with the right preparation.

The pattern matching does not need to be complex. In fact, the simpler the better. This is all about making systems workable for human minds. We simply test the values of the properties in the message; the service that matches the most properties wins.

This simple approach makes it very easy to customise behaviour. If you’ve ever had to implement sales tax rules, you’ll know how tricky they can be. You need to take into account the country, perhaps the state, the type of good, the local bylaws. Patterns make this really easy. Start with the general case, and add any special cases as you need them. The messages may or may not contain all the properties. It’s not a problem, because special properties are only relevant for special cases anyway.
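A minimal sketch of the idea (a toy matcher, not the real Seneca API, and the tax rates are made up):

```javascript
// Register actions against property patterns; the pattern matching
// the most properties of a message wins.
const patterns = [];

function add(pattern, action) {
  patterns.push({ pattern, action });
}

function act(msg) {
  let best = null;
  let bestCount = -1;
  for (const { pattern, action } of patterns) {
    const keys = Object.keys(pattern);
    if (keys.every((k) => msg[k] === pattern[k]) && keys.length > bestCount) {
      best = action;
      bestCount = keys.length;
    }
  }
  return best && best(msg);
}

// Start with the general case; add special cases as you need them.
add({ role: 'tax' }, () => ({ rate: 0.2 }));
add({ role: 'tax', country: 'US' }, () => ({ rate: 0.05 }));
add({ role: 'tax', country: 'US', state: 'CA' }, () => ({ rate: 0.0725 }));

console.log(act({ role: 'tax' }).rate);                             // 0.2
console.log(act({ role: 'tax', country: 'US' }).rate);              // 0.05
console.log(act({ role: 'tax', country: 'US', state: 'CA' }).rate); // 0.0725
```

Messages without the special properties simply fall through to the general case, which is exactly the behaviour the tax rules need.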

Cross-cutting concerns are also easy to support with pattern matching. For example, to log all the messages related to saving data, simply grab those as they appear, make the log entry, and then send the message on its way. You can add permissions, caching, multiple databases. All without affecting the underlying services. Of course, some work is needed to layer up the pattern matching the way you need it, but this is straightforward in practice.

The greatest benefit that we have seen is the ability to compose and customise services. Software components are only reusable to the extent that they can be reused. Pattern matching lets you do this in a very decoupled way. Since all you care about is transforming the message in some way, you won’t break lower services so long as your transformations are additive.

A good example here is user registration. You might have a basic registration service that saves the user to a database. But then you’ll want to do things like send out a welcome email, configure their settings, verify their credit card, or any number of project-specific pieces of business logic. You don’t extend user registration by inheriting from a base class. You extend by watching out for user registration messages. There is very little scope for breakage.

Obviously, while these two strategies, micro-services and pattern matching, can be implemented and used directly, it’s much easier to do so in the context of a toolkit. We have, of course, written one for Node.js, called Seneca.

Galileo’s Moons

We’re building our business on the belief that the language tools that we have used to build large systems in the past are insufficient. They do not deliver. They are troublesome and unproductive.

This is not surprising. Many programming languages, and object-oriented ones in particular, are motivated by ideas of mathematical purity. They have rough edges and conceptual black holes, because they were easier to implement that way. JavaScript is to an extent guilty of all this too. But it is a small language, and it does give us the freedom to work around these mistakes. We’re not in the business of inventing new programming languages, so JavaScript will have to do the job. We are in the business of doing things better. Node.js and JavaScript help us do that, because they make it easy to work with micro-services, and pattern matching, our preferred approach to large-scale systems development.

In 1610, the great Italian astronomer Galileo Galilei published a small pamphlet describing the discoveries he had made with his new telescope. This document, Sidereus Nuncius (the Starry Messenger), changed our view of the world.

Galileo had observed that four stars near the planet Jupiter behaved in a very strange way. They seemed to move in a straight line backwards and forwards across the planet. The only reasonable explanation was that there were moons orbiting Jupiter, and Galileo was observing them side-on. This simple realisation showed that some celestial bodies did not orbit the earth, and ultimately destroyed the Ptolemaic theory that the sun and planets orbited the earth.

We need to change the way we think about programming. We need to start from the facts of our own mental abilities. The thought patterns we are good at. If we align our programming languages with our abilities, we can build much larger systems. We can keep one step ahead of technical debt, and we can make programming fun again.
