Last summer, I wrote a couple of posts about my lightweight, roll-your-own approach to deploying PSGI (Dancer) web apps.

In those posts, I described how I avoided heavyweight deployment tools by writing a small, custom Perl script (app_service) to start and manage them. It was minimal, transparent, and easy to replicate.

It also wasn’t great.

What Changed?

The system mostly worked, but it had a number of growing pains:

  • It didn’t integrate with the host operating system in a meaningful way.
  • Services weren’t resilient — no automatic restarts on failure.
  • There was no logging consolidation, no dependency management (e.g., waiting for the network), and no visibility in tools like systemctl.
  • If a service crashed, I’d usually find out via curl, not journalctl.

As I started running more apps, this ad-hoc approach became harder to justify. It was time to grow up.

Enter psgi-systemd-deploy

So today (with some help from ChatGPT) I wrote psgi-systemd-deploy — a simple, declarative deployment tool for PSGI apps that integrates directly with systemd. It generates .service files for your apps from environment-specific config and handles all the fiddly bits (paths, ports, logging, restart policies, etc.) with minimal fuss.

Key benefits:

  • Declarative config via .deploy.env
  • Optional .env file support for application-specific settings
  • Environment-aware templating using envsubst (see the sketch below)
  • No lock-in — it just writes systemd units you can inspect and manage yourself
  • Safe — supports a --dry-run mode so you can preview changes before deploying
  • Convenient — includes a run_all helper script for managing all your deployed apps with one command
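
To give a flavour of the approach, here is a minimal sketch of what an envsubst-driven unit template could look like. It is illustrative only: the variable names come from the .deploy.env example below, the ExecStart line assumes a Starman-via-plackup setup, and it is not the actual template shipped with psgi-systemd-deploy.

[Unit]
Description=${WEBAPP_DESC}
After=network.target

[Service]
User=${WEBAPP_USER}
Group=${WEBAPP_GROUP}
WorkingDirectory=${WEBAPP_WORKDIR}
ExecStart=/usr/bin/plackup -s Starman --port ${WEBAPP_PORT} --workers ${WEBAPP_WORKER_COUNT} app.psgi
Restart=always

[Install]
WantedBy=multi-user.target

Rendering it is then a single substitution step, something like:

$ envsubst < webapp.service.template > /etc/systemd/system/${WEBAPP_SERVICE_NAME}.service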

A Real-World Example

You may know about my Line of Succession web site. This is one of the Dancer apps I’ve been talking about. To deploy it, I wrote a .deploy.env file that looks like this:

WEBAPP_SERVICE_NAME=succession
WEBAPP_DESC="British Line of Succession"
WEBAPP_WORKDIR=/opt/succession
WEBAPP_USER=succession
WEBAPP_GROUP=psacln
WEBAPP_PORT=2222
WEBAPP_WORKER_COUNT=5
WEBAPP_APP_PRELOAD=1

And optionally a .env file for app-specific settings (e.g., database credentials). Then I run:

$ /path/to/psgi-systemd-deploy/deploy.sh

And that’s it. The app is now a first-class systemd service, automatically started on boot and restartable with systemctl.
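
For example, once it's deployed you can manage it like any other systemd unit (using the service name from the config above):

$ sudo systemctl enable --now succession.service
$ systemctl status succession.service
$ journalctl -u succession.service -f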

Managing All Your Apps with run_all

Once you’ve deployed several PSGI apps using psgi-systemd-deploy, you’ll probably want an easy way to manage them all at once. That’s where the run_all script comes in.

It’s a simple but powerful wrapper around systemctl that automatically discovers all deployed services by scanning for .deploy.env files. That means no need to hard-code service names or paths — it just works, based on the configuration you’ve already provided.
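
The discovery logic is roughly this shape (a simplified sketch rather than the real script; the /opt path is just an assumption for illustration):

#!/bin/bash
# Sketch: find each .deploy.env, read the service name from it,
# and apply the requested systemctl action to that service.
action="$1"

for env_file in /opt/*/.deploy.env; do
  service=$(grep -E '^WEBAPP_SERVICE_NAME=' "$env_file" | cut -d= -f2)
  [ -n "$service" ] && systemctl "$action" "${service}.service"
done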

Here’s how you might use it:

# Restart all PSGI apps
$ run_all restart

# Show current status
$ run_all status

# Stop them all (e.g., for maintenance)
$ run_all stop

And if you want machine-readable output for scripting or monitoring, there’s a --json flag:

$ run_all --json is-active | jq .
[
  {
    "service": "succession.service",
    "action": "is-active",
    "status": 0,
    "output": "active"
  },
  {
    "service": "klortho.service",
    "action": "is-active",
    "status": 0,
    "output": "active"
  }
]
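
Because the output is plain JSON, it composes nicely with jq. For example, to list only the services that are not currently active:

$ run_all --json is-active | jq -r '.[] | select(.output != "active") | .service'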

Under the hood, run_all uses the same environment-driven model as the rest of the system — no surprises, no additional config files. It’s just a lightweight helper that understands your layout and automates the boring bits.

It’s not a replacement for systemctl, but it makes common tasks across many services far more convenient — especially during development, deployment, or server reboots.

A Clean Break

The goal of psgi-systemd-deploy isn’t to replace Docker, K8s, or full-featured PaaS systems. It’s for the rest of us — folks running VPSes or bare-metal boxes where PSGI apps just need to run reliably and predictably under the OS’s own tools.

If you’ve been rolling your own init scripts, cron jobs, or nohup-based hacks, give it a look. It’s clean, simple, and reliable — and a solid step up from duct tape.

➡ View the code on GitHub

The post Deploying Dancer Apps – The Next Generation first appeared on Perl Hacks.

Like most developers, I have a mental folder labelled “useful little tools I’ll probably never build.” Small utilities, quality-of-life scripts, automations — they’d save time, but not enough to justify the overhead of building them. So they stay stuck in limbo.

That changed when I started using AI as a regular part of my development workflow.

Now, when I hit one of those recurring minor annoyances — something just frictiony enough to slow me down — I open a ChatGPT tab. Twenty minutes later, I usually have a working solution. Not always perfect, but almost always 90% of the way there. And once that initial burst of momentum is going, finishing it off is easy.

It’s not quite mind-reading. But it is like having a superpowered pair programmer on tap.

The Problem

Obviously, I do a lot of Perl development. When working on a Perl project, it’s common to have one or more lib/ directories in the repo that contain the project’s modules. To run test scripts or local tools, I often need to set the PERL5LIB environment variable so that Perl can find those modules.

But I’ve got a lot of Perl projects — often nested in folders like ~/git, and sometimes with extra lib/ directories for testing or shared code. And I switch between them frequently. Typing:

export PERL5LIB=lib

…over and over gets boring fast. And worse, if you forget to do it, your test script breaks with a misleading “Can’t locate Foo/Bar.pm” error.

What I wanted was this:

  • Every time I cd into a directory, if there are any valid lib/ subdirectories beneath it, set PERL5LIB automatically.

  • Only include lib/ dirs that actually contain .pm files.

  • Skip junk like .vscode, blib, and old release folders like MyModule-1.23/.

  • Don’t scan the entire world if I cd ~/git, which contains hundreds of repos.

  • Show me what it’s doing, and let me test it in dry-run mode.

The Solution

With ChatGPT, I built a drop-in Bash function in about half an hour that does exactly that. It’s now saved as perl5lib_auto.sh, and it:

  • Wraps cd() to trigger a scan after every directory change

  • Finds all qualifying lib/ directories beneath the current directory

  • Filters them using simple rules:

    • Must contain .pm files

    • Must not be under .vscode/, .blib/, or versioned build folders

  • Excludes specific top-level directories (like ~/git) by default

  • Lets you configure everything via environment variables

  • Offers verbose, dry-run, and force modes

  • Can append to or overwrite your existing PERL5LIB

You drop it in your ~/.bashrc (or wherever you like), and your shell just becomes a little bit smarter.
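
The core trick, wrapping cd() so a scan runs after every directory change, looks roughly like this. It's a minimal sketch of the idea, not the full perl5lib_auto.sh (which adds the exclusions, caps and modes described above):

# Sketch: override cd, then collect lib/ dirs that contain .pm files
cd() {
  builtin cd "$@" || return

  local libs
  libs=$(find . -type d -name lib 2>/dev/null | while read -r d; do
    # Only keep lib/ dirs that actually contain at least one .pm file
    find "$d" -name '*.pm' -print -quit | grep -q . && echo "${PWD}${d#.}"
  done | paste -sd: -)

  [ -n "$libs" ] && export PERL5LIB="$libs"
}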

Usage Example

source ~/bin/perl5lib_auto.sh

cd ~/code/MyModule
# => PERL5LIB set to: /home/user/code/MyModule/lib

PERL5LIB_VERBOSE=1 cd ~/code/AnotherApp
# => [PERL5LIB] Found 2 eligible lib dir(s):
# =>   /home/user/code/AnotherApp/lib
# =>   /home/user/code/AnotherApp/t/lib
# => PERL5LIB set to: /home/user/code/AnotherApp/lib:/home/user/code/AnotherApp/t/lib

You can also set environment variables to customise behaviour:

export PERL5LIB_EXCLUDE_DIRS="$HOME/git:$HOME/legacy"
export PERL5LIB_EXCLUDE_PATTERNS=".vscode:blib"
export PERL5LIB_LIB_CAP=5
export PERL5LIB_APPEND=1

Or simulate what it would do:

PERL5LIB_DRYRUN=1 cd ~/code/BigProject

Try It Yourself

The full script is available on GitHub:

👉 https://github.com/davorg/perl5lib_auto

I’d love to hear how you use it — or how you’d improve it. Feel free to:

  • ⭐ Star the repo

  • 🐛 Open issues for suggestions or bugs

  • 🔀 Send pull requests with fixes, improvements, or completely new ideas

It’s a small tool, but it’s already saved me a surprising amount of friction. If you’re a Perl hacker who jumps between projects regularly, give it a try — and maybe give AI co-coding a try too while you’re at it.

What useful little utilities have you written with help from an AI pair-programmer?

The post Turning AI into a Developer Superpower: The PERL5LIB Auto-Setter first appeared on Perl Hacks.

You might know that I publish books about Perl at Perl School. What you might not know is that I also publish more general technical books at Clapham Technical Press. If you scroll down to the bottom of that page, you’ll see a list of the books that I’ve published. You’ll also see evidence of the problem I’ve been solving this morning.

Books tend to have covers that are in a portrait aspect ratio. But the template I’m using to display them requires images in a landscape aspect ratio. This is a common enough problem. And, of course, we’ve developed a common way of getting around it. You’ll see it on that page. We create a larger version of the image (large enough to fill the width of where the image is displayed), apply some level of Gaussian blur to the image and insert a new copy of the image over that. So we get our original image with a tastefully blurred background which echoes the colour of the image. ChatGPT tells me this is called a “Blurred Fill”.

So that’s all good. But as I’m publishing more books, I need to create these images on a pretty regular basis. And, of course, if I do something more than three or four times, I will want to automate.

A while ago, I wrote a simple program called “blur” that used Imager to apply the correct transformations to an image. But this morning, I decided I should really make that program a bit more useful. And release it to CPAN. So that’s what I’ve been doing.

The Problem

Adjusting images to fit various aspect ratios without losing essential content or introducing unsightly borders is a frequent challenge. Manually creating a blurred background for each image is time-consuming and inefficient, especially when dealing with multiple images or integrating into automated workflows.

The Solution: App::BlurFill

App::BlurFill is a Perl module and CLI tool designed to streamline the process of creating images with blurred backgrounds. It takes an input image and generates a new image where the original is centred over a blurred version of itself, adjusted to the specified dimensions.

How It Works

  1. Input: Provide the source image along with the desired width and height.
  2. Processing:
    • The tool creates a blurred version of the original image to serve as the background.
    • It then overlays the original image, centred, onto this background.
  3. Output: A new image file with the specified dimensions, combining the original image and its blurred background.
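
The underlying technique is straightforward with Imager. Here is a minimal sketch of the blurred-fill idea (illustrative only, not App::BlurFill's actual source):

use strict;
use warnings;
use Imager;

my ($in, $w, $h) = ('input.jpg', 800, 600);
my $src = Imager->new(file => $in) or die Imager->errstr;

# Background: scale to *cover* the target box, blur it, crop to size
my $bg = $src->scale(xpixels => $w, ypixels => $h, type => 'max');
$bg->filter(type => 'gaussian', stddev => 10) or die $bg->errstr;
$bg = $bg->crop(width => $w, height => $h);

# Foreground: scale to *fit inside* the box, then paste it centred
my $fg = $src->scale(xpixels => $w, ypixels => $h, type => 'min');
$bg->paste(
  src  => $fg,
  left => int(($w - $fg->getwidth)  / 2),
  top  => int(($h - $fg->getheight) / 2),
);

$bg->write(file => 'output.jpg') or die $bg->errstr;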

Installation and Usage

Install via CPAN:

cpanm App::BlurFill

Then to use the CLI tool:

blurfill --width=800 --height=600 input.jpg

This command will generate input_blur.jpg with the specified dimensions.

Web API

App::BlurFill also includes a web interface built with Dancer2. You can start the web server and send POST requests with an image file to receive the processed image in response.

Example using curl:

curl -OJ -X POST http://localhost:5000/blur -F "image=@input.jpg"

The response will be the new image file, ready for use.
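
The shape of that endpoint in Dancer2 might look something like this (a hypothetical sketch: process_blur stands in for the actual image-processing call, and only the route and the "image" field match the curl example above):

use Dancer2;

post '/blur' => sub {
    # 'image' matches the -F "image=@input.jpg" field in the curl example
    my $upload = upload('image') or send_error 'No image supplied', 400;

    my $out = process_blur($upload->tempname);  # hypothetical helper

    send_file($out, system_path => 1, content_type => 'image/jpeg');
};

start;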

Under the Hood

App::BlurFill is written in Perl 5.40, using the new class feature (perlclass). It makes use of the Imager module for image processing tasks. Currently, it supports JPG, PNG and GIF.

What’s Next?

Future enhancements may include:

  • Support for modern image formats like WebP.
  • More customisation options.
  • A Docker container to make it easier to set up and use.
  • Maybe a hosted version. Maybe it’s even a business idea.

App::BlurFill aims to simplify the task of creating visually consistent images across various platforms and devices. Feedback and contributions are welcome to help improve its functionality and usability.

Please let me know if you find it useful, or if there are extra features you would like to see.

Oh, and why not buy some Clapham Technical Press books!

Update: I forgot to include a link to the GitHub repository. It’s at https://github.com/davorg-cpan/app-blurfill

The post Reformating images with App::BlurFill first appeared on Perl Hacks.

I write blog posts in a number of different places:

  • Davblog has been my general blog for about twenty years
  • Perl Hacks is where I write about Perl
  • My Substack newsletter is mostly tech stuff but can also wander into entrepreneurship and other topics

And most of those posts get syndicated to other places:

  • Tech stuff will usually end up on dev.to
  • Non-tech stuff will go to Medium
  • Occasionally, stuff about Perl will be republished on perl.com

It’s also possible that I’ll write original posts on one of these syndication sites without posting to one of my sites first.

Recently, when revamping my professional website, I decided that I wanted to display a list of recent posts from all of those sources. But because of the syndication, it was all a bit noisy: multiple copies of the same post, repeated titles, and a poor reading experience.

What I wanted was a single, clean feed — a unified view of everything I’ve written, without repetition.

So I wrote a tool.

The Problem

I wanted to:

  • Aggregate several feeds into one
  • Remove syndicated duplicates automatically
  • Prefer the canonical/original version of each post
  • Output the result in Atom (or optionally RSS or JSON)

The Solution: App::FeedDeduplicator

App::FeedDeduplicator is a new CPAN module and CLI tool for aggregating and deduplicating web feeds.

It reads a list of feed URLs from a JSON config file, downloads and parses them, filters out duplicates (based on canonical URLs or titles), sorts the results by date, and emits a clean, modern feed.

How It Works

  1. A JSON config file provides the list of feeds and the desired output format:
    {
      "output_format": "json",
      "max_entries": 10,
      "feeds": [{
        "feed": "https://perlhacks.com/feed/",
        "web":  "https://perlhacks.com/",
        "name": "Perl Hacks"
      }, {
        "feed": "https://davecross.substack.com/feed",
        "web":  "https://davecross.substack.com/",
        "name": "Substack"
      }, {
        "feed": "https://blog.dave.org.uk/feed/",
        "web":  "https://blog.dave.org.uk/",
        "name": "Davblog"
      }, {
        "feed": "https://dev.to/feed/davorg",
        "web":  "https://dev.to/davorg",
        "name": "Dev.to"
      }, {
        "feed": "https://davorg.medium.com/feed",
        "web":  "https://davorg.medium.com/",
        "name": "Medium"
      }]
    }
  2. Each feed is fetched and parsed using XML::Feed
  3. For each entry, the linked page is scanned for a <link rel="canonical"> tag
  4. If found, that canonical URL is used to detect duplicates; if not, the entry’s title is used as a fallback
  5. Duplicates are discarded, keeping only one version (preferably canonical)
  6. The resulting list is sorted by date and emitted in Atom, RSS, or JSON
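
The canonical-URL check (steps 3 and 4) is the interesting part. Here is a minimal sketch of that idea (not the module's actual code; @entries stands in for the XML::Feed entries gathered in step 2):

use strict;
use warnings;
use LWP::UserAgent;

my $ua = LWP::UserAgent->new(timeout => 10);

# Given an entry's link, return the canonical URL if the page
# declares one; otherwise fall back to the link itself.
sub dedup_key {
    my ($url) = @_;

    my $res = $ua->get($url);
    return $url unless $res->is_success;

    # Crude scan for <link rel="canonical" href="...">; a real
    # implementation would use a proper HTML parser.
    if ($res->decoded_content =~
        /<link[^>]+rel=["']canonical["'][^>]+href=["']([^"']+)["']/i) {
        return $1;
    }
    return $url;
}

# @entries would come from step 2, e.g.
#   my @entries = map { XML::Feed->parse(URI->new($_))->entries } @feed_urls;
my @entries;

# Keep only the first entry seen for each deduplication key
my %seen;
my @unique = grep { !$seen{ dedup_key($_->link) }++ } @entries;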

Installation and Usage

Install via CPAN:

cpanm App::FeedDeduplicator

Then run it with:

feed-deduplicator config.json

If no config file is specified, it will try the FEED_DEDUP_CONFIG environment variable or fall back to ~/.feed-deduplicator/config.json.

There’s also a Docker image with the latest version installed.

Under the Hood

The tool is written in Perl 5.38+ and uses the new class feature (perlclass) for a cleaner OO structure:

  • App::FeedDeduplicator::Aggregator handles feed downloading and parsing
  • App::FeedDeduplicator::Deduplicator detects and removes duplicates
  • App::FeedDeduplicator::Publisher generates the final output

What’s Next?

It’s all very much a work in progress at the moment. It works for me, but there are bound to be some improvements needed so that it works for more people. A few things I already know I want to improve:

  • Add a configuration option for the LWP::UserAgent agent identifier string
  • Add configuration options for the fixed elements of the generated web feed (name, link and things like that)
  • Add a per-feed limit for the number of entries published (I can see a use case where someone wants to publish a single entry from each feed)
  • Some kind of configuration template for the JSON version of the output

Try It Out

If you want a clean, single-source feed that represents your writing without duplication, App::FeedDeduplicator might be just what you need.

I’m using it now to power the aggregated feed on my site. Let me know what you think!

The post Cleaner web feed aggregation with App::FeedDeduplicator first appeared on Perl Hacks.

Last week, I wrote a blog post about how I gave new life to an old domain by building a new website to live on that domain. With help from ChatGPT, it only took a few hours to build the site. While I’ll be adding new businesses and events to the site over time, that is currently a manual process and the site is mostly static.

This week, I wanted to take things a bit further. I wanted to build a site that was updated daily – but without any input from me.

Whois tells me I first registered cool-stuff.co.uk in September 1997. It was one of the first domains I registered. It has hosted a couple of very embarrassing early sites that I built, and for a while, it served as the email domain for several members of my family. But since they all moved to GMail, it’s been pretty much dormant. What it has never hosted is what I originally registered it for – a directory of cool things on the world wide web. So that’s what I decided to build.

So here’s the plan:

  • A very simple website
  • Each day it features a cool website – just the name, a link and a simple description
  • An archive page showing previously featured sites
  • Auto-generated each day with no manual intervention from me

I decided to stick with Jekyll and Minimal Mistakes as I enjoyed using them to build Balham.org. They make it easy to spin up a good-looking website, but they also have ways to add complexity when required. That complexity wasn’t needed here.

The site itself was very simple. It’s basically driven from a YAML file called coolstuff.yml which lists the sites we’ve featured. From that, we build a front page which features a new site every day and an archive page which lists all the previous sites we have featured. Oh, and we also have an RSS feed of the sites we feature. This is all pretty basic stuff.

As you’d expect from one of my projects, the site is hosted on GitHub Pages and is updated automatically using GitHub Actions.

It’s in GitHub Actions where the clever (not really all that clever – just new to me) stuff happens. There’s a workflow called update-coolstuff.yml which runs at 02:00 every morning and adds a new site. And it does that by asking ChatGPT to recommend a site. Here’s the workflow:

name: Update Cool Stuff

on:
  schedule:
    - cron: '0 2 * * *' # Runs at 2 AM UTC
  workflow_dispatch:

jobs:
  update:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4

      - name: Install Perl dependencies
        run: |
          sudo apt-get update && sudo apt-get install -y cpanminus
          cpanm -n --sudo OpenAPI::Client::OpenAI YAML JSON::MaybeXS

      - name: Get a cool website from OpenAI
        env:
          OPENAI_API_KEY: ${{ secrets.OPENAI_API_KEY }}
        run: |
          perl .github/scripts/fetch_cool_site

      - name: Commit and push if changed
        run: |
          git config user.name "github-actions"
          git config user.email "github-actions@github.com"
          git add docs/_data/coolstuff.yml
          git diff --cached --quiet || git commit -m "Add new cool site"
          git push

There’s not much clever going on there. I needed to ensure I had an OpenAI subscription with credit in the account (this is going to cost a tiny amount of money to run – I’m making one request a day!), and I set up the API key as a secret in the repo (with the name “OPENAI_API_KEY”).

The magic all happens in the “fetch_cool_site” program. So let’s look at that next:

#!/usr/bin/env perl

use strict;
use warnings;

use builtin qw[trim];

use OpenAPI::Client::OpenAI;
use YAML qw(LoadFile DumpFile);
use Time::Piece;
use JSON::MaybeXS;

my $api_key = $ENV{"OPENAI_API_KEY"} or die "OPENAI_API_KEY is not set\n";

my $client = OpenAPI::Client::OpenAI->new;

my $prompt = join " ",
  "Suggest a really cool, creative, or fun website to feature today on a site called 'Cool Stuff'.",
  "Just return the name, URL, and a one-paragraph description of why it's cool. Only return one site.",
  "The URL should just be the URL itself. Do not wrap it in Markdown.";

my $res = $client->createChatCompletion({
  body => {
    model => 'gpt-4o',
    messages => [
      { role => 'system', content => 'You are a helpful curator of awesome websites.' },
      { role => 'user', content => $prompt },
    ],
    temperature => 1.0,
  }
});

my $text = $res->res->json->{choices}[0]{message}{content};
my @lines = split /\n/, $text;

my ($name, $url, @desc) = @lines;
$name =~ s/^\*\s*//;
my $description = join ' ', @desc;

my $new_entry = {
  date => localtime->ymd,
  name => trim($name),
  url  => trim($url),
  description => trim($description),
};

my $file = "docs/_data/coolstuff.yml";
my $entries = LoadFile($file);

unless (grep { $_->{url} eq $new_entry->{url} } @$entries) {
  push @$entries, $new_entry;
  DumpFile($file, $entries);
}

We’re using OpenAPI::Client::OpenAI to talk to the OpenAI API. From my limited knowledge, that seems to be the best option, currently. But I’m happy to be pointed to better suggestions.

Most of the code is copied from the examples in the module’s distribution. And the parsing of the response is probably a bit fragile. I expect I could tweak the prompt a bit to get the data back in a slightly more robust format.
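
One obvious improvement (and presumably why JSON::MaybeXS is already loaded in the script) would be to ask the model for JSON and decode that, rather than splitting the reply into lines. As a sketch, the relevant changes would look something like this:

# Ask for strict JSON instead of free text...
my $prompt = join " ",
  "Suggest a really cool, creative, or fun website to feature today.",
  'Respond with only a JSON object with the keys "name", "url" and "description".';

# ...then decode the reply directly instead of splitting on newlines
my $data = decode_json($text);  # {name => ..., url => ..., description => ...}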

But it works as it is. This morning I woke up and found a new site featured on the front page. So, rather than spend time tweaking exactly how it works, I thought it would be a good idea to get a blog post out there, so other people can see how easy it is to use ChatGPT in this way.

What do you think? Can you see ways that you’d like to include ChatGPT responses in some of your code?

The website is live at cool-stuff.co.uk.

The post Finding cool stuff with ChatGPT first appeared on Perl Hacks.

Building a website in a day — with help from ChatGPT

A few days ago, I looked at an unused domain I owned — balham.org — and thought: “There must be a way to make this useful… and maybe even make it pay for itself.”

So I set myself a challenge: one day to build something genuinely useful. A site that served a real audience (people in and around Balham), that was fun to build, and maybe could be turned into a small revenue stream.

It was also a great excuse to get properly stuck into Jekyll and the Minimal Mistakes theme — both of which I’d dabbled with before, but never used in anger. And, crucially, I wasn’t working alone: I had ChatGPT as a development assistant, sounding board, researcher, and occasional bug-hunter.

The Idea

Balham is a reasonably affluent, busy part of south west London. It’s full of restaurants, cafés, gyms, independent shops, and people looking for things to do. It also has a surprisingly rich local history — from Victorian grandeur to Blitz-era tragedy.

I figured the site could be structured around three main pillars:

  • A directory of local businesses
  • A list of upcoming events
  • A local history section

Throw in a curated homepage and maybe a blog later, and I had the bones of a useful site. The kind of thing that people would find via Google or get sent a link to by a friend.

The Stack

I wanted something static, fast, and easy to deploy. My toolchain ended up being:

  • Jekyll for the site generator
  • Minimal Mistakes as the theme
  • GitHub Pages for hosting
  • Custom YAML data files for businesses and events
  • ChatGPT for everything from content generation to Liquid loops

The site is 100% static, with no backend, no databases, no CMS. It builds automatically on GitHub push, and is entirely hosted via GitHub Pages.

Step by Step: Building It

I gave us about six solid hours to build something real. Here’s what we did (“we” meaning me + ChatGPT):

1. Domain Setup and Scaffolding

The domain was already pointed at GitHub Pages, and I had a basic “Hello World” site in place. We cleared that out, set up a fresh Jekyll repo, and added a _config.yml that pointed at the Minimal Mistakes remote theme. No cloning or submodules.

2. Basic Site Structure

We decided to create four main pages:

  • Homepage (index.md)
  • Directory (directory/index.md)
  • Events (events/index.md)
  • History (history/index.md)

We used the layout: single layout provided by Minimal Mistakes, and created custom permalinks so URLs were clean and extension-free.

3. The Business Directory

This was built from scratch using a YAML data file (_data/businesses.yml). ChatGPT gathered an initial list of 20 local businesses (restaurants, shops, pubs, etc.), checked their status, and added details like name, category, address, website, and a short description.

In the template, we looped over the list, rendered sections with conditional logic (e.g., don’t output the website link if it’s empty), and added anchor IDs to each entry so we could link to them directly from the homepage.

4. The Events Page

Built exactly the same way, but using _data/events.yml. To keep things realistic, we seeded a small number of example events and included a note inviting people to email us with new submissions.

5. Featured Listings

We wanted the homepage to show a curated set of businesses and events. So we created a third data file, _data/featured.yml, which just listed the names of the featured entries. Then in the homepage template, we used where and slugify to match names and pull in the full record from businesses.yml or events.yml. Super DRY.
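
In Liquid terms, that lookup is just a where filter against the data file. Roughly, as a sketch (field names assumed from the structure described above):

{% for name in site.data.featured.businesses %}
  {% assign biz = site.data.businesses | where: "name", name | first %}
  <h3 id="{{ biz.name | slugify }}">{{ biz.name }}</h3>
  <p>{{ biz.description }}</p>
{% endfor %}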

6. Map and Media

We added a map of Balham as a hero image, styled responsively. Later we created a .responsive-inline-image class to embed supporting images on the history page without overwhelming the layout.

7. History Section with Real Archival Images

This turned out to be one of the most satisfying parts. We wrote five paragraphs covering key moments in Balham’s development — Victorian expansion, Du Cane Court, The Priory, the Blitz, and modern growth.

Then we sourced five CC-licensed or public domain images (from Wikimedia Commons and Geograph) to match each paragraph. Each was wrapped in a <figure> with proper attribution and a consistent CSS class. The result feels polished and informative.

8. Metadata, SEO, and Polish

We went through all the basics:

  • Custom title and description in front matter for each page
  • Open Graph tags and Twitter cards via site config
  • A branded favicon using RealFaviconGenerator
  • Added robots.txt, sitemap.xml, and a hand-crafted humans.txt
  • Clean URLs, no .html extensions
  • Anchored IDs for deep linking

9. Analytics and Search Console

We added GA4 tracking using Minimal Mistakes’ built-in support, and verified the domain with Google Search Console. A sitemap was submitted, and indexing kicked in within minutes.

10. Accessibility and Performance

We ran Lighthouse and WAVE tests. Accessibility came out at 100%. Performance dipped slightly due to Google Fonts and image size, but we did our best to optimise without sacrificing aesthetics.

11. Footer CTA

We added a site-wide footer call-to-action inviting people to email us with suggestions for businesses or events. This makes the site feel alive and participatory, even without a backend form.

What Worked Well

  • ChatGPT as co-pilot: I could ask it for help with Liquid templates, CSS, content rewrites, and even bug-hunting. It let me move fast without getting bogged down in docs.
  • Minimal Mistakes: It really is an excellent theme. Clean, accessible, flexible.
  • Data-driven content: Keeping everything in YAML meant templates stayed simple, and the whole site is easy to update.
  • Staying focused: We didn’t try to do everything. Four pages, one day, good polish.

What’s Next?

  • Add category filtering to the directory
  • Improve the OG/social card image
  • Add structured JSON-LD for individual events and businesses
  • Explore monetisation: affiliate links, sponsored listings, local partnerships
  • Start some blog posts or “best of Balham” roundups

Final Thoughts

This started as a fun experiment: could I monetise an unused domain and finally learn Jekyll properly?

What I ended up with is a genuinely useful local resource — one that looks good, loads quickly, and has room to grow.

If you’re sitting on an unused domain, and you’ve got a free day and a chatbot at your side — you might be surprised what you can build.

Oh, and one final thing — obviously you can also get ChatGPT to write a blog post talking about the project :-)

Originally published at https://blog.dave.org.uk on March 23, 2025.

A few days ago, I looked at an unused domain I owned — balham.org — and thought: “There must be a way to make this useful… and maybe even make it pay for itself.”

So I set myself a challenge: one day to build something genuinely useful. A site that served a real audience (people in and around Balham), that was fun to build, and maybe could be turned into a small revenue stream.

It was also a great excuse to get properly stuck into Jekyll and the Minimal Mistakes theme — both of which I’d dabbled with before, but never used in anger. And, crucially, I wasn’t working alone: I had ChatGPT as a development assistant, sounding board, researcher, and occasional bug-hunter.

The Idea

Balham is a reasonably affluent, busy part of south west London. It’s full of restaurants, cafés, gyms, independent shops, and people looking for things to do. It also has a surprisingly rich local history — from Victorian grandeur to Blitz-era tragedy.

I figured the site could be structured around three main pillars:

  • A directory of local businesses
  • A list of upcoming events
  • A local history section

Throw in a curated homepage and maybe a blog later, and I had the bones of a useful site. The kind of thing that people would find via Google or get sent a link to by a friend.

The Stack

I wanted something static, fast, and easy to deploy. My toolchain ended up being:

  • Jekyll for the site generator
  • Minimal Mistakes as the theme
  • GitHub Pages for hosting
  • Custom YAML data files for businesses and events
  • ChatGPT for everything from content generation to Liquid loops

The site is 100% static, with no backend, no databases, no CMS. It builds automatically on GitHub push, and is entirely hosted via GitHub Pages.

Step by Step: Building It

I gave us about six solid hours to build something real. Here’s what we did (“we” meaning me + ChatGPT):

1. Domain Setup and Scaffolding

The domain was already pointed at GitHub Pages, and I had a basic “Hello World” site in place. We cleared that out, set up a fresh Jekyll repo, and added a _config.yml that pointed at the Minimal Mistakes remote theme. No cloning or submodules.

2. Basic Site Structure

We decided to create four main pages:

  • Homepage (index.md)
  • Directory (directory/index.md)
  • Events (events/index.md)
  • History (history/index.md)

We used the layout: single layout provided by Minimal Mistakes, and created custom permalinks so URLs were clean and extension-free.

3. The Business Directory

This was built from scratch using a YAML data file (_data/businesses.yml). ChatGPT gathered an initial list of 20 local businesses (restaurants, shops, pubs, etc.), checked their status, and added details like name, category, address, website, and a short description.

In the template, we looped over the list, rendered sections with conditional logic (e.g., don’t output the website link if it’s empty), and added anchor IDs to each entry so we could link to them directly from the homepage.

4. The Events Page

Built exactly the same way, but using _data/events.yml. To keep things realistic, we seeded a small number of example events and included a note inviting people to email us with new submissions.

5. Featured Listings

We wanted the homepage to show a curated set of businesses and events. So we created a third data file, _data/featured.yml, which just listed the names of the featured entries. Then in the homepage template, we used where and slugify to match names and pull in the full record from businesses.yml or events.yml. Super DRY.

6. Map and Media

We added a map of Balham as a hero image, styled responsively. Later we created a .responsive-inline-image class to embed supporting images on the history page without overwhelming the layout.

7. History Section with Real Archival Images

This turned out to be one of the most satisfying parts. We wrote five paragraphs covering key moments in Balham’s development — Victorian expansion, Du Cane Court, The Priory, the Blitz, and modern growth.

Then we sourced five CC-licensed or public domain images (from Wikimedia Commons and Geograph) to match each paragraph. Each was wrapped in a <figure> with proper attribution and a consistent CSS class. The result feels polished and informative.
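Each one looked roughly like this. I'm reusing the .responsive-inline-image class from step 6 as the consistent class, and the file name and attribution are placeholders rather than the real ones:

```html
<figure class="responsive-inline-image">
  <img src="/assets/images/du-cane-court.jpg" alt="Du Cane Court, Balham">
  <figcaption>Du Cane Court. Photo: a Wikimedia Commons contributor, CC BY-SA.</figcaption>
</figure>
```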

8. Metadata, SEO, and Polish

We went through all the basics:

  • Custom title and description in front matter for each page
  • Open Graph tags and Twitter cards via site config
  • A branded favicon using RealFaviconGenerator
  • Added robots.txt, sitemap.xml, and a hand-crafted humans.txt
  • Clean URLs, no .html extensions
  • Anchored IDs for deep linking

9. Analytics and Search Console

We added GA4 tracking using Minimal Mistakes’ built-in support, and verified the domain with Google Search Console. A sitemap was submitted, and indexing kicked in within minutes.
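For the record, Minimal Mistakes makes GA4 a one-off config change, along these lines (the measurement ID is a placeholder):

```yaml
# _config.yml (measurement ID is a placeholder)
analytics:
  provider: "google-gtag"
  google:
    tracking_id: "G-XXXXXXXXXX"
```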

10. Accessibility and Performance

We ran Lighthouse and WAVE tests. Accessibility came out at 100%. Performance dipped slightly due to Google Fonts and image size, but we did our best to optimise without sacrificing aesthetics.

11. Footer CTA

We added a site-wide footer call-to-action inviting people to email us with suggestions for businesses or events. This makes the site feel alive and participatory, even without a backend form.

What Worked Well

  • ChatGPT as co-pilot: I could ask it for help with Liquid templates, CSS, content rewrites, and even bug-hunting. It let me move fast without getting bogged down in docs.
  • Minimal Mistakes: It really is an excellent theme. Clean, accessible, flexible.
  • Data-driven content: Keeping everything in YAML meant templates stayed simple, and the whole site is easy to update.
  • Staying focused: We didn’t try to do everything. Four pages, one day, good polish.

What’s Next?

  • Add category filtering to the directory
  • Improve the OG/social card image
  • Add structured JSON-LD for individual events and businesses
  • Explore monetisation: affiliate links, sponsored listings, local partnerships
  • Start some blog posts or “best of Balham” roundups

Final Thoughts

This started as a fun experiment: could I monetise an unused domain and finally learn Jekyll properly?

What I ended up with is a genuinely useful local resource — one that looks good, loads quickly, and has room to grow.

If you’re sitting on an unused domain, and you’ve got a free day and a chatbot at your side — you might be surprised what you can build.


Oh, and one final thing – obviously you can also get ChatGPT to write a blog post talking about the project :-)

The post Building a website in a day — with help from ChatGPT appeared first on Davblog.

I built and launched a new website yesterday. It wasn’t what I planned to do, but the idea popped into my head while I was drinking my morning coffee on Clapham Common and it seemed to be the kind of thing I could complete in a day – so I decided to put my original plans on hold and built it instead.

The website is aimed at small business owners who think they need a website (or want to update their existing one) but who know next to nothing about web development and can easily fall prey to the many cowboy website companies that seem to dominate the “making websites for small companies” section of our industries. The site is structured around a number of questions you can ask a potential website builder to try and weed out the dodgier elements.

I’m not really in that sector of our industry. But while writing the content for that site, it occurred to me that some people might be interested in the tools I use to build sites like this.

Content

I generally build websites about topics that I’m interested in and, therefore, know a fair bit about. But I probably don’t know everything about these subjects. So I’ll certainly brainstorm some ideas with ChatGPT. And, once I’ve written something, I’ll usually run it through ChatGPT again to proofread it. I consider myself a pretty good writer, but it’s embarrassing how often ChatGPT catches obvious errors.

I’ve used DALL-E (via ChatGPT) for a lot of image generation. This weekend, I subscribed to Midjourney because I heard it was better at generating images that include text. So far, that seems to be accurate.

Technology

I don’t write much raw HTML these days. I’ll generally write in Markdown and use a static site generator to turn that into a real website. This weekend I took the easy route and used Jekyll with the Minimal Mistakes theme. Honestly, I don’t love Jekyll, but it integrates well with GitHub Pages and I can usually get it to do what I want – with a combination of help from ChatGPT and reading the source code. I’m (slowly) building my own Static Site Generator (Aphra) in Perl. But, to be honest, I find that when I use it I can easily get distracted by adding new features rather than getting the site built.

As I’ve hinted at, if I’m building a static site (and, it’s surprising how often that’s the case), it will be hosted on GitHub Pages. It’s not really aimed at end-users, but I know how to use it pretty well now. This weekend, I used the default mechanism that regenerates the site (using Jekyll) on every commit. But if I’m using Aphra or a custom site generator, I know I can use GitHub Actions to build and deploy the site.
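If you haven't tried that route, a workflow along these lines does the job. This is a sketch rather than a real file, and build-site is a stand-in for Aphra or whatever generator you use:

```yaml
# .github/workflows/site.yml (illustrative sketch)
name: Build and deploy site
on:
  push:
    branches: [main]
permissions:
  contents: read
  pages: write
  id-token: write
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: ./build-site            # stand-in for your static site generator
      - uses: actions/upload-pages-artifact@v3
        with:
          path: _site                # directory your generator writes to
  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    steps:
      - id: deployment
        uses: actions/deploy-pages@v4
```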

If I’m writing actual HTML, then I’m old-skool enough to still use Bootstrap for CSS. There’s probably something better out there now, but I haven’t tried to work out what it is (feel free to let me know in the comments).

For a long while, I used jQuery to add Javascript to my pages – until someone was kind enough to tell me that vanilla Javascript had mostly caught up and jQuery was no longer necessary. I understand Javascript. And with help from GitHub Copilot, I can usually get it doing what I want pretty quickly.

SEO

Many years ago, I spent a couple of years working in the SEO group at Zoopla. So, now, I can’t think about building a website without considering SEO.

I quickly lose interest in the content side of SEO. Figuring out what my keywords are and making sure they’re scattered through the content at the correct frequency feels like it stifles my writing (maybe that’s an area where ChatGPT can help), but I enjoy Technical SEO. So I like to make sure that all of my pages contain the correct structured data (usually JSON-LD). I also like to ensure my sites all have useful OpenGraph headers. This isn’t really SEO, I guess, but these headers control what people see when they share content on social media. So by making that as attractive as possible (a useful title and description, an attractive image) you encourage more sharing, which increases your site’s visibility and, in a roundabout way, improves SEO.
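As an example of the kind of thing I mean, a page about a local business might carry a block like this (every detail here is made up):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  "name": "The Example Café",
  "url": "https://example-cafe.example",
  "address": {
    "@type": "PostalAddress",
    "streetAddress": "1 Example Street",
    "addressLocality": "London"
  }
}
</script>
```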

I like to register all of my sites with Ahrefs – they will crawl my sites periodically and send me a long list of SEO improvements I can make.

Monitoring

I add Google Analytics to all of my sites. That’s still the best way to find out how popular your site is and where your traffic is coming from. I used to be quite proficient with Universal Analytics, but I must admit I haven’t fully got the hang of Google Analytics 4 yet, so I’m probably only scratching the surface of what it can do.

I also register all of my sites with Google Search Console. That shows me information about how my site appears in the Google Search Index. I also link that to Google Analytics – so GA also knows what searches brought people to my sites.

Conclusion

I think that covers everything—though I’ve probably forgotten something. It might sound like a lot, but once you get into a rhythm, adding these extra touches doesn’t take long. And the additional insights you gain make it well worth the effort.

If you’ve built a website recently, I’d love to hear about your approach. What tools and techniques do you swear by? Are there any must-have features or best practices I’ve overlooked? Drop a comment below or get in touch—I’m always keen to learn new tricks and refine my process. And if you’re a small business owner looking for guidance on choosing a web developer, check out my new site—it might just save you from a costly mistake!

The post How I build websites in 2025 appeared first on Davblog.

I’ve been a member of Picturehouse Cinemas for something approaching twenty years. It costs about £60 a year and for that, you get five free tickets and discounts on your tickets and snacks. I’ve often wondered whether it’s worth paying for, but in the last couple of years, they’ve added an extra feature that makes it well worth the cost. It’s called Film Club and every week they have two curated screenings that members can see for just £1. On Sunday lunchtime, there’s a screening of an older film, and on a weekday evening (usually Wednesday at the Clapham Picturehouse), they show something new. I’ve got into the habit of seeing most of these screenings.

For most of the year, I’ve been considering a monthly post about the films I’ve seen at Film Club, but I’ve never got around to it. So, instead, you get an end-of-year dump of the almost eighty films I’ve seen.

  1. Under the Skin [4 stars] 2024-01-14
    Starting with an old(ish) favourite. The last time I saw this was a free preview for Picturehouse members, ten years ago. It’s very much a film that people love or hate. I love it. The book is great too (but very different)
  2. Go West [3.5] 2024-01-21
    They often show old films as mini-festivals of connected films. This was the first of a short series of Buster Keaton films. I hadn’t seen any of them. Go West was a film where I could appreciate the technical aspects, but I wasn’t particularly entertained
  3. Godzilla Minus One [3] 2024-01-23
    Around this time, I’d been watching a few of the modern Godzilla films from the “Monsterverse”. I hadn’t really been enjoying them. But this, unrelated, film was far more enjoyable
  4. Steamboat Bill, Jr. [4] 2024-01-28
    Back with Buster Keaton. I enjoyed this one far more.
  5. American Fiction [4] 2024-01-30
    Sometimes they’ll show an Oscar contender. I ended up having seen seven of the ten Best Picture nominees before the ceremony – which is far higher than my usual rate. I really enjoyed this one
  6. The Zone of Interest [3] 2024-02-03
    Another Oscar contender. I think I wasn’t really in the mood for this. I was tired and found it hard to follow. I should rewatch it at some point.
  7. The General [4] 2024-02-11
    More Buster Keaton. I really enjoyed this one – my favourite of the three I watched. I could very easily see myself going down a rabbit hole of obsessing over all of his films
  8. Perfect Days [3.5] 2024-02-15
    A film about the life of a toilet cleaner in Tokyo. But written and directed by Wim Wenders – so far better than that description makes it sound
  9. Wicked Little Letters [4] 2024-02-20
    I thought this would be more popular than it was. But it vanished pretty much without a trace. It’s a really nice little film about swearing
  10. Nosferatu the Vampyre [3.5] 2024-02-25
    The Sunday screenings often give me a chance to catch up with old classics that I haven’t seen before. This was one example. This was the 1979 Werner Herzog version. I should track down the 1922 original before watching the new version early next year
  11. Four Daughters [3.5] 2024-02-29
    Because the screenings cost £1, I see everything – no matter what the subject matter is. This is an example of a film I probably wouldn’t have seen without Film Club. But it was a really interesting film about a Tunisian woman who lost two of her daughters when they joined Islamic State
  12. The Persian Version [3.5] 2024-03-07
    Another film that I would have missed out on without Film Club. It’s an interesting look into the lives of Iranians in America
  13. Girlhood [3] 2024-03-10
    This was the start of another short season of related films. This time it was films made by women about the lives of women and girls. This one was about girl gangs in Paris
  14. Still Walking [3] 2024-03-16
    A Japanese family get together to commemorate the death of the eldest son. Things happen, but nothing changes
  15. Zola [3.5] 2024-03-17
    I had never heard of this film before, but really enjoyed it. It’s the true story of a stripper who goes on a road trip to Florida and gets involved in… stuff
  16. Late Night with the Devil [3.5] 2024-03-19
    I thought this was clever. A horror film that takes place on the set of a late-night chat show. Things go horribly wrong
  17. Set It Off [3.5] 2024-03-24
    A pretty standard heist film. But the protagonists are a group of black women. I enjoyed it
  18. Disco Boy [2] 2024-03-27
    I really didn’t get this film at all
  19. Girls Trip [3.5] 2024-03-31
    Another women’s road trip film. It was fun, but I can’t remember much of it now
  20. The Salt of the Earth [3] 2024-04-07
    A documentary about the work of photographer Sebastião Salgado. He was in some bad wars and saw some bad shit
  21. The Teachers’ Lounge [3.5] 2024-04-10
    Another film that got an Oscar nod. A well-made drama about tensions in the staff room of a German school.
  22. Do the Right Thing [4] 2024-04-14
    I had never seen a Spike Lee film. How embarrassing is that? This was really good (but you all knew that)
  23. Sometimes I Think About Dying [3] 2024-04-17
    I really wanted to like this. It was well-made. Daisy Ridley is a really good actress. But it didn’t really go anywhere and completely failed to grip me
  24. The Trouble with Jessica [4] 2024-04-22
    Another film that deserved to be more successful than it was. Some great comedy performances by a strong cast.
  25. Rope [4.5] 2024-04-28
    A chance to see a favourite film on the big screen for the first time. It’s regarded as a classic for good reason
  26. Blackbird Blackbird Blackberry [3] 2024-04-30
    Another film that I just wouldn’t have considered if it wasn’t part of the Film Club programme. I had visited Tbilisi a year earlier, so it was interesting to see a film that was made in Georgia. But, ultimately, it didn’t really grip me
  27. The Cars That Ate Paris [3] 2024-05-12
    Another old classic that I had never seen. It’s a bit like a precursor to Mad Max. I’m glad I’ve seen it, but I won’t be rushing to rewatch it
  28. Victoria [3.5] 2024-05-19
    This was a lot of fun. The story of one night in the life of a Spanish woman living in Berlin. Lots of stuff happens. It’s over two hours long and was shot in a single, continuous take
  29. The Beast [3.5] 2024-05-22
    This was interesting. So interesting that I rewatched it when it appeared on Mubi a few months ago. I’m not sure I can explain it all, but I’ll be rewatching again at some point (and probably revising my score upwards)
  30. Eyes Wide Shut [4] 2024-05-26
    I hadn’t seen this for maybe twenty-five years. And I don’t think I ever saw it in a cinema. It’s better than I remember
  31. Rosalie [3.5] 2024-05-28
    A film about a bearded lady in 19th-century France. I kid you not. It’s good
  32. All About My Mother [3.5] 2024-06-02
    Years ago, I went through a phase of watching loads of Almodóvar films. I was sure I’d seen this one, but I didn’t remember it at all. It’s good though
  33. Àma Gloria [3] 2024-06-04
    I misunderstood the trailer for this and was on the edge of my seat throughout waiting for a disaster to happen. But, ultimately, it was a nice film about a young girl visiting her old nanny in Cape Verde
  34. Full Metal Jacket [3.5] 2024-06-09
    This really wasn’t as good as I remembered it. Everyone remembers the training camp stuff, but half of the film happens in-country – and that’s all rather dull
  35. Sasquatch Sunset [2] 2024-06-11
    I wanted to like this. It would have made a funny two-minute SNL sketch. But it really didn’t work when stretched to ninety minutes
  36. Being John Malkovich [4] 2024-06-16
    Still great
  37. Green Border [4] 2024-06-19
    A lot of the films I’ve seen at Film Club in previous years seem to be about people crossing borders illegally. This one was about the border between Belarus and Poland. It was very depressing – but very good
  38. Attack the Block [4] 2024-06-23
    Another old favourite that it was great to see on the big screen
  39. The 400 Blows [3] 2024-06-30
    The French New Wave is a huge hole in my knowledge of cinema, so I was glad to have an opportunity to start putting that right. This, however, really didn’t grip me
  40. Bye Bye Tiberias [2.5] 2024-07-02
    Hiam Abbass (who you might know as Marcia in Succession) left her native Palestine in the 80s to live in Paris. This is a documentary following a visit she made back to her family. It didn’t really go anywhere
  41. Breathless [3] 2024-07-07
    More French New Wave. I like this more than The 400 Blows – but not much more
  42. After Hours [4] 2024-07-13
    Another old favourite from the 80s that I had never seen on the big screen. It’s still great
  43. What Ever Happened to Baby Jane? [2.5] 2024-07-14
    This was an object lesson in the importance of judging a film in its context. I know this is a great film, but watching it in the 21st century just didn’t have the impact that watching it in the early 60s would have had
  44. Crossing [3.5] 2024-07-16
    A Georgian woman travels to Istanbul to try to find her niece. We learn a lot about the gay and trans communities in the city. I enjoyed this a lot
  45. American Gigolo [3] 2024-07-28
    Something else that I had never seen. And, to be honest, I don’t think I had really missed much
  46. Dìdi (弟弟) [3.5] 2024-07-31
    Nice little story about a Taiwanese teen growing up in California
  47. I Saw the TV Glow [4] 2024-08-05
    I imagine this will be on many “best films of the year” lists. It’s a very strange film about two teens and their obsession with a TV show that closely resembles Buffy the Vampire Slayer.
  48. Hollywoodgate [2.5] 2024-08-13
    I really wanted to like this. An Egyptian filmmaker manages to get permission to film a Taliban group that takes over an American base in Afghanistan. But, ultimately, they don’t let him film anything interesting and the film is a bit of a disappointment
  49. Beverly Hills Cop [1] 2024-08-18
    I had never seen this before. And I wish I still hadn’t. One of the worst films I’ve seen in a very long time
  50. Excalibur [4] 2024-08-25
    Another old favourite that I hadn’t seen on the big screen for a very long time. This is the film that gave me an obsession with watching any film that’s based on Arthurian legend, no matter how bad (and a lot of them are very, very bad)
  51. The Quiet Girl [3.5] 2024-09-01
    A young Irish girl is sent away to spend the summer with distant relations. She comes to realise that life doesn’t have to be as grim as it usually is for her
  52. Lee [3.5] 2024-09-04
    A really good biopic about the American photographer Lee Miller. Kate Winslet is really good as Miller
  53. The Queen of My Dreams [3.5] 2024-09-11
    Another film that I wouldn’t have seen without Film Club. A Canadian Pakistani lesbian woman visits Pakistan and learns about some of the cultural pressures that shaped her mother. It’s a lovely film
  54. My Own Private Idaho [2] 2024-09-15
    Another film that I had never seen before. Some nice acting by Keanu Reeves and River Phoenix, but this really didn’t interest me
  55. Girls Will Be Girls [3.5] 2024-09-17
    A coming-of-age film about a teenage girl in India. I enjoyed this
  56. The Shape of Water [3.5] 2024-09-22
    I don’t think I’ve seen this since the year it was released (and won the Best Picture Oscar). I still enjoyed it, but I didn’t think it held up as well as I expected it to
  57. The Banshees of Inisherin [3.5] 2024-09-29
    I’d seen this on TV, but you need to see it on a big screen to get the full effect. I’m sure you all know how good it is
  58. The Full Monty [3] 2024-10-06
    I never understood why this was so much more popular than Brassed Off which is, to me at least, a far better example of the “British worker fish out of water” genre (that’s not a genre, is it?) I guess it’s the soundtrack and the slightly Beryl Cook overtones – the British love a bit of smut
  59. Timestalker [2.5] 2024-10-08
    I really wanted to like this. But it just didn’t grab me. I’ll try it again at some point
  60. Nomadland [3] 2024-10-13
    Another Best Picture Oscar winner. And it’s another one where I can really see how important and well-made it is – but it just doesn’t do anything for me
  61. The Apprentice [4] 2024-10-17
    I don’t know why Trump was so against this film. I thought he came out of this far more positively than I expected. But it seemed to barely get a release. It has still picked up a few (well-deserved) nominations though
  62. Little Miss Sunshine [4] 2024-10-20
    Another old favourite. I loved seeing this again
  63. Stoker [3] 2024-10-27
    I had never seen this before. I can’t quite put my finger on it, but I didn’t really enjoy it
  64. Anora [4] 2024-10-29
    This was probably the best film I saw this year. Well, certainly the best new film. It’s getting a lot of awards buzz. I hope it does well
  65. (500) Days of Summer [4] 2024-11-03
    I don’t think I had seen this since soon after it was released. It was great to see it again
  66. Bird [3] 2024-11-05
    This was slightly strange. I’ve seen a few films about the grimness of life on council estates. But this one threw in a bit of magical realism that didn’t really work for me
  67. Sideways [3.5] 2024-11-10
    Another film I hadn’t watched for far too long
  68. Sunshine [4] 2024-11-17
    This is one of my favourite recent(ish) scifi films. I saw it on the Science Museum’s IMAX screen in 2023, but I wasn’t going to skip the chance to see it again
  69. Conclave [3.5] 2024-11-19
    Occasionally, this series gives you a chance to see something that’s going to be up for plenty of awards. This was a good example. I enjoyed it
  70. The Grand Budapest Hotel [4] 2024-11-24
    I’ve been slightly disappointed with a few recent Wes Anderson films, so it was great to have the opportunity to see one of his best back on the big screen
  71. The Universal Theory [4] 2024-11-26
    I knew nothing about this going into it. And it was a fabulous film. Mysteries and quantum physics in the Swiss Alps. And all filmed in black and white. This didn’t get the coverage it deserved.
  72. Home Alone [2] 2024-12-08
    I thought I had never seen this before. But apparently I logged watching it many years ago. I know everyone loves it, but I couldn’t see the point
  73. The Apartment [4] 2024-12-15
    This was interesting. I have a background quest to watch all of the Best Picture Oscar winners and I hadn’t seen this one. I knew absolutely nothing about it. I thought it was really good
  74. The Taste of Things [3.5] 2024-12-21
    A film that I didn’t get to see earlier in the year. It’s largely about cooking in a late-nineteenth century French country kitchen. It would make an interesting watch alongside The Remains of the Day
  75. Christmas Eve in Miller’s Point [2] 2024-12-24
    I didn’t understand this at all. It went nowhere and said nothing interesting. A large family meets up for their traditional Christmas Eve. No-one enjoys themself
  76. La Chimera [2] 2024-12-29
    And finishing on a bit of a low. I don’t understand why this got so many good reviews. Maybe I just wasn’t in the right mood for it. Something about criminals looking for ancient relics in Italy

The post Picturehouse Film Club appeared first on Davblog.

Royal Titles Decoded: What Makes a Prince or Princess? — Line of Succession Blog

Letters Patent issued by George V in 1917

Royal titles in the United Kingdom carry a rich tapestry of history, embodying centuries of tradition while adapting to the changing landscape of the modern world. This article delves into the structure of these titles, focusing on significant changes made during the 20th and 21st centuries, and how these rules affect current royals.

The Foundations: Letters Patent of 1917

The framework for today’s royal titles was significantly shaped by the Letters Patent issued by King George V in 1917. This document was pivotal in redefining who in the royal family would be styled with “His or Her Royal Highness” (HRH) and as a prince or princess. Specifically, the 1917 Letters Patent restricted these styles to:

  • The sons and daughters of a sovereign.
  • The male-line grandchildren of a sovereign.
  • The eldest living son of the eldest son of the Prince of Wales.

This move was partly in response to the anti-German sentiment of World War I, aiming to streamline the monarchy and solidify its British identity by reducing the number of royals with German titles.

Notice that the definitions talk about “a sovereign”, not “the sovereign”. This means that when the sovereign changes, no-one will lose their royal title (for example, Prince Andrew is still the son of a sovereign, even though he is no longer the son of the sovereign). However, people can gain royal titles when the sovereign changes — we will see examples below.

Extension by George VI in 1948

Understanding the implications of the existing rules as his family grew, King George VI issued a new Letters Patent in 1948 to extend the style of HRH and prince/princess to the children of the future queen, Princess Elizabeth (later Queen Elizabeth II). This was crucial as, without this adjustment, Princess Elizabeth’s children would not automatically have become princes or princesses because they were not male-line grandchildren of the monarch. This ensured that Charles and Anne were born with princely status, despite being the female-line grandchildren of a monarch.

The Modern Adjustments: Queen Elizabeth II’s 2012 Update

Queen Elizabeth II’s update to the royal titles in 2012 before the birth of Prince William’s children was another significant modification. The Letters Patent of 2012 decreed that all the children of the eldest son of the Prince of Wales would hold the title of HRH and be styled as prince or princess, not just the eldest son. This move was in anticipation of changes brought about by the Succession to the Crown Act of 2013, which ended the system of male primogeniture, ensuring that the firstborn child of the Prince of Wales, regardless of gender, would be the direct heir to the throne. Without this change, there could have been a situation where Prince William’s first child (and the heir to the throne) was a daughter who wasn’t a princess, whereas her eldest (but younger) brother would have been a prince.

Impact on Current Royals

  • Children of Princess Anne: When Anne married Captain Mark Phillips in 1973, he was offered an earldom but declined it. Consequently, their children, Peter Phillips and Zara Tindall, were not born with any titles. This decision reflects Princess Anne’s preference for her children to have a more private life, albeit still active within the royal fold.
  • Children of Prince Edward: Initially, Prince Edward’s children were styled as children of an earl, despite his being a son of the sovereign. More recently, his son James assumed the courtesy title Earl of Wessex when Prince Edward was created Duke of Edinburgh, the title previously held by Prince Philip. His daughter, Lady Louise Windsor, was also styled in line with Edward’s wish for a lower-profile royal status for his children.
  • Children of Prince Harry: When Archie and Lilibet were born, they were not entitled to princely status or HRH. They were great-grandchildren of the monarch and, despite the Queen’s adjustments in 2012, their cousins — George, Charlotte and Louis — were the only great-grandchildren of the monarch with those titles. When their grandfather became king, they became male-line grandchildren of a monarch and, hence, a prince and a princess. It took a while for those changes to be reflected on the royal family website. This presumably gave the royal household time to reflect on the effect of the children’s parents withdrawing from royal life and moving to the USA.

Special Titles: Prince of Wales and Princess Royal

  • Prince of Wales: Historically granted to the heir apparent, this title is not automatic and needs to be specifically bestowed by the monarch. Prince Charles was created Prince of Wales in 1958, though he had been the heir apparent since 1952. Prince William, on the other hand, received the title in 2022 — just a day after the death of Queen Elizabeth II.
  • Princess Royal: This title is reserved for the sovereign’s eldest daughter but is not automatically reassigned when the previous holder passes away or when a new eldest daughter is born. Queen Elizabeth II was never Princess Royal because her aunt, Princess Mary, held the title during her lifetime. Princess Anne currently holds this title, having received it in 1987.

The Fade of Titles: Distant Royals

As the royal family branches out, descendants become more distant from the throne and lose their entitlement to HRH and princely status. For example, the Duke of Gloucester, the Duke of Kent, Prince Michael of Kent and Princess Alexandra all have princely status as male-line grandchildren of George V. Their children are all great-grandchildren of a monarch and, therefore, do not have royal styles or titles. This reflects a natural trimming of the royal family tree, focusing the monarchy’s public role on those directly in line for succession.

Conclusion

The evolution of British royal titles reflects both adherence to deep-rooted traditions and responsiveness to modern expectations. These titles not only delineate the structure and hierarchy within the royal family but also adapt to changes in societal norms and the legal landscape, ensuring the British monarchy remains both respected and relevant in the contemporary era.

Originally published at https://blog.lineofsuccession.co.uk on April 25, 2024.


The view of the planet [AI-generated image]

Changing rooms are the same all over the galaxy and this one really played to the stereotype. The lights flickered that little bit more than you’d want them to, a sizeable proportion of the lockers wouldn’t lock and the whole room needed a good clean. It didn’t fit with the eye-watering amount of money we had all paid for the tour.

There were a dozen or so of us changing from our normal clothes into outfits that had been supplied by the tour company — outfits that were supposed to render us invisible when we reached our destination. Not invisible in the “bending light rays around you” way, they would just make us look enough like the local inhabitants that no-one would give us a second glance.

Appropriate changing room etiquette was followed. Everyone was either looking at the floor or into their locker to avoid eye contact with anyone else. People talked in lowered voices to people they had come with. People who, like me, had come alone were silent. I picked up on some of the quiet conversations — they were about the unusual flora and fauna of our location and the unique event we were here to see.

Soon, we had all changed and were ushered into a briefing room where our guide told us many things we already knew. She had slides explaining the physics behind the phenomenon and was at great pains to emphasise the uniqueness of the event. No other planet in the galaxy had been found that met all of the conditions for what we were going to see. She went through the history of tourism to this planet — decades of uncontrolled visits followed by the licensing of a small number of carefully vetted companies like the one we were travelling with.

She then turned to more practical matters. She reiterated that our outfits would allow us to pass for locals, but that we should do all we could to avoid any interactions with the natives. She also reminded us that we should only look at the event through the equipment that we would be issued with on our way down to the planet.

Through a window in the briefing room a planet, our destination, hung in space. Beyond the planet, its star could also be seen.

An hour or so later, we were on the surface of the planet. We were deposited at the top of a grassy hill on the edge of a large crowd of the planet’s inhabitants. Most of us were of the same basic body shape as the quadruped locals and, at first glance at least, passed for them. A few of us were less lucky and had to stay in the vehicles to avoid suspicion.

The timing of the event was well understood and the company had dropped us off early enough that we were able to find a good viewing spot but late enough that we didn’t have long to wait. We had been milling around for half an hour or so when a palpable moment of excitement passed through the crowd and everyone looked to the sky.

Holding the equipment I had been given to my eyes I could see what everyone else had noticed. A small bite seemed to have been taken from the bottom left of the planet’s sun. As we watched, the bite got larger and larger as the planet’s satellite moved in front of the star. The satellite appeared to be a perfect circle, but at the last minute — just before it covered the star completely — it became obvious that the edge wasn’t smooth as gaps between irregularities on the surface (mountains, I suppose) allowed just a few points of light through.

And then the satellite covered the sun and the atmosphere changed completely. The world turned dark and all conversations stopped. All of the local animals went silent. It was magical.

My mind went back to the slides explaining the phenomenon. Obviously, the planet’s satellite and star weren’t the same size, but their distance from the planet exactly balanced their difference in size so they appeared the same size in the sky. And the complex interplay of orbits meant that on rare occasions like this, the satellite would completely and exactly cover the star.

That was what we were there for. This was what was unique about this planet. No other planet in the galaxy had a star and a satellite that appeared exactly the same size in the sky. This is what made the planet the most popular tourist spot in the galaxy.

Ten minutes later, it was over. The satellite continued on its path and the star was gradually uncovered. Our guide bundled us into the transport and back up to our spaceship.

Before leaving the vicinity of the planet, our pilot found three locations in space where the satellite and the star lined up in the same way and created fake eclipses for those of us who had missed taking photos of the real one.

Originally published at https://blog.dave.org.uk on April 7, 2024.

The post The Tourist appeared first on Davblog.

I really thought that 2023 would be the year I got back into the swing of seeing gigs. But somehow I ended up seeing even fewer than I did in 2022 – just 12, compared with the 16 I saw the previous year. Sometimes, I look at Martin’s monthly gig round-ups and wonder what I’m doing with my life!

I normally list my ten favourite gigs of the year, but it would be rude to miss just two gigs from the list, so here are all twelve gigs I saw this year – in, as always, chronological order.

  • John Grant (supported by The Faultress) at St. James’s Church
    John Grant has become one of those artists I try to see whenever they pass through London. And this was a particularly special night as he was playing an acoustic set in one of the most atmospheric venues in London. The evening was only slightly marred by the fact I arrived too late to get a decent seat and ended up not being able to see anything.
  • Hannah Peel at Kings Place
    Hannah Peel was the artist in residence at Kings Place for a few months during the year and played three gigs during that time. This was the first of them – where she played her recent album, Fir Wave, in its entirety. A very laid-back and thoroughly enjoyable evening.
  • Orbital at the Eventim Apollo
    I’ve been meaning to get around to seeing Orbital for many years. This show was originally planned to be at the Brixton Academy but as that venue is currently closed, it was relocated to Hammersmith. To be honest, this evening was slightly hampered by the fact I don’t know as much of their work as I thought I did and it was all a bit samey. I ended up leaving before the encore.
  • Duran Duran (supported by Jake Shears) at the O2 Arena
    Continuing my quest to see all of the bands I was listening to in the 80s (and, simultaneously, ticking off the one visit to the O2 that I allow myself each year). I really enjoyed the nostalgia of seeing Duran Duran but, to be honest, I think I enjoyed Jake Shears more – and it was the Scissor Sisters I was listening to on the way home.
  • Hannah Peel and Beibei Wang at Kings Place
    Even in a year where I only see a few gigs, I still manage to see artists more than once. This was the second of Hannah Peel’s artist-in-residence shows. She appeared with Chinese percussionist Beibei Wang in a performance that was completely spontaneous and unrehearsed. Honestly, some parts were more successful than others, but it was certainly an interesting experience.
  • Songs from Summerisle at the Barbican Hall
    The Wicker Man is one of my favourite films, so I jumped at the chance to see the songs from the soundtrack performed live. But unfortunately, the evening was a massive disappointment. The band sounded like they had met just before the show and, while they all obviously knew the songs, they hadn’t rehearsed them together. Maybe they were going for a rustic feel – but, to me, it just sounded unprofessional.
  • Belle and Sebastian at the Roundhouse
    Another act that I try to see as often as possible. I know some people see Belle and Sebastian as the most Guardian-reader band ever – but I love them. This show saw them on top form.
  • Jon Anderson and the Paul Green Rock Academy at the Shepherds Bush Empire
    I’ve seen Yes play live a few times in the last ten years or so and, to be honest, it can sometimes be a bit over-serious and dull. In this show, Jon Anderson sang a load of old Yes songs with a group of teenagers from the Paul Green Rock Academy (the school that School of Rock was based on) and honestly, the teenagers brought such a feeling of fun to the occasion that it was probably the best Yes-related show that I’ve seen.
  • John Grant and Richard Hawley at the Barbican Hall
    Another repeated act – my second time seeing John Grant in a year. This was something different as he was playing a selection of Patsy Cline songs. I don’t listen to Patsy Cline much, but I knew a few more of the songs than I expected to. This was a bit lower-key than I was expecting.
  • Peter Hook and the Light at the Eventim Apollo
    I’ve been planning to see Peter Hook and the Light for a couple of years. There was a show I had tickets for in 2020, but it was postponed because of COVID and when it was rescheduled, I was unable to go, so I cancelled my ticket and got a refund. So I was pleased to get another chance. And this show had them playing both of the Substance albums (Joy Division and New Order). I know New Order still play some Joy Division songs in their sets, but this is probably the best chance I’ll have to see some deep Joy Division cuts played live. I really enjoyed this show.
  • Heaven 17 at the Shepherds Bush Empire
    It seems I see Heaven 17 live most years and they usually appear on my “best of” lists. This show was celebrating the fortieth anniversary of their album The Luxury Gap – so that got played in full, alongside many other Heaven 17 and Human League songs. A thoroughly enjoyable night.
  • The Imagined Village and Afro-Celt Sound System at the Roundhouse
    I’ve seen both The Imagined Village and the Afro-Celts live once before. And they were two of the best gigs I’ve ever seen. I pretty much assumed that the death of Simon Emmerson (who was an integral part of both bands) earlier in 2023 would mean that both bands would stop performing. But this show was a tribute to Emmerson and the bands both reformed to celebrate his work. This was probably my favourite gig of the year. That’s The Imagined Village (featuring two Carthys, four Coppers and Billy Bragg) in the photo at the top of this post.

So, what’s going to happen in 2024? I wonder if I’ll get back into the habit of going to more shows. I only have a ticket for one gig next year – They Might Be Giants playing Flood in November (a show that was postponed from this year). I guess we’ll see. Tune in this time next year to see what happened.

The post 2023 in Gigs appeared first on Davblog.

Ratio: The Simple Codes Behind the Craft of Everyday Cooking (1) (Ruhlman's Ratios)
author: Michael Ruhlman
name: David
average rating: 4.06
book published: 2009
rating: 0
read at:
date added: 2023/02/06
shelves: currently-reading
review:

Dave Cross posted a photo:

Goodbye Vivienne

via Instagram instagr.am/p/CmyT_MSNR3-/

Dave Cross posted a photo:

Low sun on Clapham Common this morning

via Instagram instagr.am/p/Cmv4y1eNiPn/

Dave Cross posted a photo:

There are about a dozen parakeets in this tree. I can hear them and (occasionally) see them

via Instagram instagr.am/p/Cmv4rUAta58/

Dave Cross posted a photo:

Sunrise on Clapham Common

via Instagram instagr.am/p/Cmq759NtKtE/

Dave Cross posted a photo:

Brixton Academy

via Instagram instagr.am/p/CmOfgfLtwL_/

S.
author: J.J. Abrams
name: David
average rating: 3.86
book published: 2013
rating: 0
read at:
date added: 2022/01/16
shelves: currently-reading
review:

The Introvert Entrepreneur
author: Beth Buelow
name: David
average rating: 3.37
book published: 2015
rating: 0
read at:
date added: 2020/01/27
shelves: currently-reading
review:


Some thoughts on ways to measure the quality of Perl code (and, hence, get a basis for improving it)

How (and why) I spent 90 minutes writing a Twitterbot that tweeted the Apollo 11 mission timeline (shifted by 50 years)

A talk from the European Perl Conference 2019 (but not about Perl)

Prawn Cocktail Years
author: Lindsey Bareham
name: David
average rating: 4.50
book published: 1999
rating: 0
read at:
date added: 2019/07/29
shelves: currently-reading
review:


The slides from a half-day workshop on career development for programmers that I ran at The Perl Conference in Glasgow

A (not entirely serious) talk that I gave at the London Perl Mongers technical meeting in March 2018. It talks about how and why I built a web site listing the line of succession to the British throne back through history.
Dave Cross / Wednesday 04 June 2025 00:33