Earlier this week, I read a post from someone who failed a job interview because they used a hash slice in some sample code and the interviewer didn’t believe it would work.
That’s not just wrong — it’s a teachable moment. Perl has several kinds of slices, and they’re all powerful tools for writing expressive, concise, idiomatic code. If you’re not familiar with them, you’re missing out on one of Perl’s secret superpowers.
In this post, I’ll walk through all the main types of slices in Perl — from the basics to the modern conveniences added in recent versions — using a consistent, real-world-ish example. Whether you’re new to slices or already slinging %hash{...} like a pro, I hope you’ll find something useful here.
Let’s imagine you’re writing code to manage employees in a company. You’ve got an array of employee names and a hash of employee details.
my @employees = qw(alice bob carol dave eve);
my %details = (
alice => 'Engineering',
bob => 'Marketing',
carol => 'HR',
dave => 'Engineering',
eve => 'Sales',
);
We’ll use these throughout to demonstrate each kind of slice.
List slices are slices from a literal list. They let you pick multiple values from a list in a single operation:
my @subset = (qw(alice bob carol dave eve))[1, 3];
# @subset = ('bob', 'dave')
You can also destructure directly:
my ($employee1, $employee2) = (qw(alice bob carol))[0, 2];
# $employee1 = 'alice', $employee2 = 'carol'
Simple, readable, and no loop required.
Array slices are just like list slices, but from an array variable:
my @subset = @employees[0, 2, 4];
# @subset = ('alice', 'carol', 'eve')
You can also assign into an array slice to update multiple elements:
@employees[1, 3] = ('beatrice', 'daniel');
# @employees = ('alice', 'beatrice', 'carol', 'daniel', 'eve')
Handy for bulk updates without writing explicit loops.
This is where some people start to raise eyebrows — but hash slices are perfectly valid Perl and incredibly useful.
Let’s grab departments for a few employees:
my @departments = @details{'alice', 'carol', 'eve'};
# @departments = ('Engineering', 'HR', 'Sales')
The @ sigil here indicates that we’re asking for a list of values, even though %details is a hash.
You can assign into a hash slice just as easily:
@details{'bob', 'carol'} = ('Support', 'Legal');
This kind of bulk update is especially useful when processing structured data or transforming API responses.
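For example, here’s a minimal sketch (with invented field names) that pulls a whitelisted subset of fields out of a decoded API response in one assignment:

my %response = (
    id            => 42,
    name          => 'alice',
    email         => 'alice@example.com',
    password_hash => 'secret',   # internal; we don't want to expose this
);

# Copy just the public fields across with two hash slices.
my %public;
@public{'id', 'name', 'email'} = @response{'id', 'name', 'email'};
# %public = (id => 42, name => 'alice', email => 'alice@example.com')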
Starting in Perl 5.20, you can use %array[...] to return index/value pairs — a very elegant way to extract and preserve positions in a single step.
my @indexed = %employees[1, 3];
# @indexed = (1 => 'bob', 3 => 'dave')
You get a flat list of index/value pairs. This is particularly helpful when mapping or reordering data based on array positions.
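For instance, using the same @employees data, reversing the flat list turns index/value pairs into a value-to-position lookup:

my %position_of = reverse %employees[0 .. $#employees];
# %position_of = (alice => 0, bob => 1, carol => 2, dave => 3, eve => 4)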
You can even delete from an array this way:
my @removed = delete %employees[0, 4];
# @removed = (0 => 'alice', 4 => 'eve')
And afterwards you’ll have this:
# @employees = (undef, 'bob', 'carol', 'dave', undef)
The final type of slice — also added in Perl 5.20 — is the %hash{...} key/value slice. This returns a flat list of key/value pairs, perfect for passing to functions that expect key/value lists.
my @kv = %details{'alice', 'dave'};
# @kv = ('alice', 'Engineering', 'dave', 'Engineering')
You can construct a new hash from this easily:
my %engineering = (%details{'alice', 'dave'});
This avoids intermediate looping and makes your code clear and declarative.
Type | Syntax | Returns | Added in
---|---|---|---
List slice | (list)[@indices] | Values | Ancient
Array slice | @array[@indices] | Values | Ancient
Hash slice | @hash{@keys} | Values | Ancient
Index/value array slice | %array[@indices] | Index-value pairs | Perl 5.20
Key/value hash slice | %hash{@keys} | Key-value pairs | Perl 5.20
If someone tells you that @hash{...} or %array[...] doesn’t work — they’re either out of date or mistaken. These forms are standard, powerful, and idiomatic Perl.
Slices make your code cleaner, clearer, and more concise. They let you express what you want directly, without boilerplate. And yes — they’re perfectly interview-appropriate.
So next time you’re reaching for a loop to pluck a few values from a hash or an array, pause and ask: could this be a slice?
If the answer’s yes — go ahead and slice away.
The post A Slice of Perl first appeared on Perl Hacks.
Back in January, I wrote a blog post about adding JSON-LD to your web pages to make it easier for Google to understand what they were about. The example I used was my ReadABooker site, which encourages people to read more Booker Prize shortlisted novels (and to do so by buying them using my Amazon Associate links).
I’m slightly sad to report that in the five months since I implemented that change, visits to the website have remained pretty much static and I have yet to make my fortune from Amazon kickbacks. But that’s ok, we just use it as an excuse to learn more about SEO and to apply more tweaks to the website.
I’ve been using the most excellent Ahrefs site to get information about how good the on-page SEO is for many of my sites. Every couple of weeks, Ahrefs crawls the site and will give me a list of suggestions of things I can improve. And for a long time, I had been putting off dealing with one of the biggest issues – because it seemed so difficult.
The site didn’t have enough text on it. You could get lists of Booker years, authors and books. And, eventually, you’d end up on a book page where, hopefully, you’d be tempted to buy a book. But the book pages were pretty bare – just the title, author, year they were short-listed and an image of the cover. Oh, and the all-important “Buy from Amazon” button. Ahrefs was insistent that I needed more text (at least a hundred words) on a page in order for Google to take an interest in it. And given that my database of Booker books included hundreds of books by hundreds of authors, that seemed like a big job to take on.
But, a few days ago, I saw a solution to that problem – I could ask ChatGPT for the text.
I wrote a blog post in April about generating a daily-updating website using ChatGPT. This would be similar, but instead of writing the text directly to a Jekyll website, I’d write it to the database and add it to the templates that generate the website.
Adapting the code was very quick. Here’s the finished version for the book blurbs.
#!/usr/bin/env perl

use strict;
use warnings;
use builtin qw[trim];
use feature 'say';

use OpenAPI::Client::OpenAI;
use Time::Piece;
use Encode qw[encode];

use Booker::Schema;

my $sch = Booker::Schema->get_schema;
my $count = 0;
my $books = $sch->resultset('Book');

# Process at most 20 books per run, skipping any that already have a blurb.
while ($count < 20 and my $book = $books->next) {
  next if defined $book->blurb;
  ++$count;
  my $blurb = describe_title($book);
  $book->update({ blurb => $blurb });
}

sub describe_title {
  my ($book) = @_;
  my ($title, $author) = ($book->title, $book->author->name);

  my $debug = 1;

  # The client reads OPENAI_API_KEY from the environment itself;
  # this check just fails early with a helpful message if it's missing.
  my $api_key = $ENV{"OPENAI_API_KEY"} or die "OPENAI_API_KEY is not set\n";

  my $client = OpenAPI::Client::OpenAI->new;

  my $prompt = join " ",
    'Produce a 100-200 word description for the book',
    "'$title' by $author",
    'Do not mention the fact that the book was short-listed for (or won)',
    'the Booker Prize';

  my $res = $client->createChatCompletion({
    body => {
      model => 'gpt-4o',
      # model => 'gpt-4.1-nano',
      messages => [
        { role => 'system', content => 'You are someone who knows a lot about popular literature.' },
        { role => 'user', content => $prompt },
      ],
      temperature => 1.0,
    },
  });

  # Extract the generated text and encode it as UTF-8 before storing it.
  my $text = $res->res->json->{choices}[0]{message}{content};
  $text = encode('UTF-8', $text);

  say $text if $debug;

  return $text;
}
There are a couple of points to note: the loop caps each run at twenty books, skipping any that already have a blurb; and the response text is explicitly encoded as UTF-8 before being written to the database.
I then produced a similar program that did the same thing for authors. It’s similar enough that the next time I need something like this, I’ll spend some time turning it into a generic program.
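For what it’s worth, I imagine the generic version looking something like this hypothetical helper (the names are illustrative, not from any released code):

# Fill in a missing text column on any resultset, using a callback
# to generate the text for each row.
sub update_missing_field {
    my ($schema, %arg) = @_;

    my $rs    = $schema->resultset($arg{resultset});
    my $field = $arg{field};
    my $count = 0;

    while ($count < $arg{limit} and my $row = $rs->next) {
        next if defined $row->$field;
        ++$count;
        $row->update({ $field => $arg{make_text}->($row) });
    }

    return $count;
}

# e.g. update_missing_field($sch, resultset => 'Book', field => 'blurb',
#                           limit => 20, make_text => \&describe_title);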
I then added the new database fields to the book and author templates and re-published the site. You can see the results in, for example, the pages for Salman Rushdie and Midnight’s Children.
I had one more slight concern going into this project. I pay for access to the ChatGPT API. I usually have about $10 in my pre-paid account and I really had no idea how much this was going to cost me. I needn’t have worried. Here’s a graph showing the bump in my API usage on the day I ran the code for all books and authors:
But you can also see that my total costs for the month so far are $0.01!
So, all-in-all, I call that a success and I’ll be using similar techniques to generate content for some other websites.
The post Generating Content with ChatGPT first appeared on Perl Hacks.
Last summer, I wrote a couple of posts about my lightweight, roll-your-own approach to deploying PSGI (Dancer) web apps:
In those posts, I described how I avoided heavyweight deployment tools by writing a small, custom Perl script (app_service) to start and manage them. It was minimal, transparent, and easy to replicate.
It also wasn’t great.
The system mostly worked, but it had a number of growing pains: the apps weren’t proper services you could manage with systemctl, and checking whether they were up meant curl, not journalctl. As I started running more apps, this ad-hoc approach became harder to justify. It was time to grow up.
psgi-systemd-deploy
So today (with some help from ChatGPT) I wrote psgi-systemd-deploy — a simple, declarative deployment tool for PSGI apps that integrates directly with systemd. It generates .service files for your apps from environment-specific config and handles all the fiddly bits (paths, ports, logging, restart policies, etc.) with minimal fuss.
Key benefits:
Declarative — per-app configuration lives in a simple .deploy.env file
Optional .env file support for application-specific settings
Simple templating via envsubst
Transparent — generates plain systemd units you can inspect and manage yourself
Safe — supports a --dry-run mode so you can preview changes before deploying
Convenient — includes a run_all helper script for managing all your deployed apps with one command
You may know about my Line of Succession web site (introductory talk). This is one of the Dancer apps I’ve been talking about. To deploy it, I wrote a .deploy.env file that looks like this:
WEBAPP_SERVICE_NAME=succession
WEBAPP_DESC="British Line of Succession"
WEBAPP_WORKDIR=/opt/succession
WEBAPP_USER=succession
WEBAPP_GROUP=psacln
WEBAPP_PORT=2222
WEBAPP_WORKER_COUNT=5
WEBAPP_APP_PRELOAD=1
And optionally a .env file for app-specific settings (e.g., database credentials). Then I run:
$ /path/to/psgi-systemd-deploy/deploy.sh
And that’s it. The app is now a first-class systemd service, automatically started on boot and restartable with systemctl.
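To make that concrete, here’s a sketch of the kind of unit file a tool like this might generate from the config above. It isn’t copied from psgi-systemd-deploy’s actual template, and the starman invocation in particular is an assumption:

[Unit]
Description=British Line of Succession
After=network.target

[Service]
User=succession
Group=psacln
WorkingDirectory=/opt/succession
# Assumed PSGI server command; the real template may differ.
ExecStart=/usr/bin/starman --listen :2222 --workers 5 bin/app.psgi
Restart=always

[Install]
WantedBy=multi-user.target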
run_all
Once you’ve deployed several PSGI apps using psgi-systemd-deploy, you’ll probably want an easy way to manage them all at once. That’s where the run_all script comes in.
It’s a simple but powerful wrapper around systemctl that automatically discovers all deployed services by scanning for .deploy.env files. That means no need to hard-code service names or paths — it just works, based on the configuration you’ve already provided.
Here’s how you might use it:
# Restart all PSGI apps
$ run_all restart
# Show current status
$ run_all status
# Stop them all (e.g., for maintenance)
$ run_all stop
And if you want machine-readable output for scripting or monitoring, there’s a --json flag:
$ run_all --json is-active | jq .
[
{
"service": "succession.service",
"action": "is-active",
"status": 0,
"output": "active"
},
{
"service": "klortho.service",
"action": "is-active",
"status": 0,
"output": "active"
}
]
Under the hood, run_all uses the same environment-driven model as the rest of the system — no surprises, no additional config files. It’s just a lightweight helper that understands your layout and automates the boring bits.
It’s not a replacement for systemctl, but it makes common tasks across many services far more convenient — especially during development, deployment, or server reboots.
The goal of psgi-systemd-deploy isn’t to replace Docker, K8s, or full-featured PaaS systems. It’s for the rest of us — folks running VPSes or bare-metal boxes where PSGI apps just need to run reliably and predictably under the OS’s own tools.
If you’ve been rolling your own init scripts, cron jobs, or nohup-based hacks, give it a look. It’s clean, simple, and reliable — and a solid step up from duct tape.
The post Deploying Dancer Apps – The Next Generation first appeared on Perl Hacks.
Like most developers, I have a mental folder labelled “useful little tools I’ll probably never build.” Small utilities, quality-of-life scripts, automations — they’d save time, but not enough to justify the overhead of building them. So they stay stuck in limbo.
That changed when I started using AI as a regular part of my development workflow.
Now, when I hit one of those recurring minor annoyances — something just frictiony enough to slow me down — I open a ChatGPT tab. Twenty minutes later, I usually have a working solution. Not always perfect, but almost always 90% of the way there. And once that initial burst of momentum is going, finishing it off is easy.
It’s not quite mind-reading. But it is like having a superpowered pair programmer on tap.
Obviously, I do a lot of Perl development. When working on a Perl project, it’s common to have one or more lib/ directories in the repo that contain the project’s modules. To run test scripts or local tools, I often need to set the PERL5LIB environment variable so that Perl can find those modules.
But I’ve got a lot of Perl projects — often nested in folders like ~/git, and sometimes with extra lib/ directories for testing or shared code. And I switch between them frequently. Typing:
export PERL5LIB=lib
…over and over gets boring fast. And worse, if you forget to do it, your test script breaks with a misleading “Can’t locate Foo/Bar.pm” error.
What I wanted was this:
Every time I cd into a directory, if there are any valid lib/ subdirectories beneath it, set PERL5LIB automatically.
Only include lib/ dirs that actually contain .pm files.
Skip junk like .vscode, blib, and old release folders like MyModule-1.23/.
Don’t scan the entire world if I cd ~/git, which contains hundreds of repos.
Show me what it’s doing, and let me test it in dry-run mode.
With ChatGPT, I built a drop-in Bash function in about half an hour that does exactly that. It’s now saved as perl5lib_auto.sh, and it:
Wraps cd() to trigger a scan after every directory change
Finds all qualifying lib/ directories beneath the current directory
Filters them using simple rules: they must contain .pm files, and must not be under .vscode/, blib/, or versioned build folders
Excludes specific top-level directories (like ~/git) by default
Lets you configure everything via environment variables
Offers verbose, dry-run, and force modes
Can append to or overwrite your existing PERL5LIB
You drop it in your ~/.bashrc (or wherever you like), and your shell just becomes a little bit smarter.
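The heart of the trick is a shell function that shadows cd. Here’s a stripped-down sketch of the idea, not the real perl5lib_auto.sh (which adds the filtering, caps and dry-run handling described above):

cd() {
    builtin cd "$@" || return

    # Find lib/ directories below the new cwd that contain at least one .pm file.
    local libs
    libs=$(find . -maxdepth 3 -type d -name lib \
                -not -path '*/.vscode/*' -not -path '*/blib/*' 2>/dev/null |
           while read -r d; do
               find "$d" -name '*.pm' -print -quit | grep -q . && echo "$PWD${d#.}"
           done | paste -sd: -)

    [ -n "$libs" ] && export PERL5LIB="$libs"
}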
source ~/bin/perl5lib_auto.sh
cd ~/code/MyModule
# => PERL5LIB set to: /home/user/code/MyModule/lib
PERL5LIB_VERBOSE=1 cd ~/code/AnotherApp
# => [PERL5LIB] Found 2 eligible lib dir(s):
# => /home/user/code/AnotherApp/lib
# => /home/user/code/AnotherApp/t/lib
# => PERL5LIB set to: /home/user/code/AnotherApp/lib:/home/user/code/AnotherApp/t/lib
You can also set environment variables to customise behaviour:
export PERL5LIB_EXCLUDE_DIRS="$HOME/git:$HOME/legacy"
export PERL5LIB_EXCLUDE_PATTERNS=".vscode:blib"
export PERL5LIB_LIB_CAP=5
export PERL5LIB_APPEND=1
Or simulate what it would do:
PERL5LIB_DRYRUN=1 cd ~/code/BigProject
The full script is available on GitHub: https://github.com/davorg/perl5lib_auto
I’d love to hear how you use it — or how you’d improve it. Feel free to:
⭐ Star the repo
🐛 Open issues for suggestions or bugs
🔀 Send pull requests with fixes, improvements, or completely new ideas
It’s a small tool, but it’s already saved me a surprising amount of friction. If you’re a Perl hacker who jumps between projects regularly, give it a try — and maybe give AI co-coding a try too while you’re at it.
What useful little utilities have you written with help from an AI pair-programmer?
The post Turning AI into a Developer Superpower: The PERL5LIB Auto-Setter first appeared on Perl Hacks.
You might know that I publish books about Perl at Perl School. What you might not know is that I also publish more general technical books at Clapham Technical Press. If you scroll down to the bottom of that page, you’ll see a list of the books that I’ve published. You’ll also see evidence of the problem I’ve been solving this morning.
Books tend to have covers that are in a portrait aspect ratio. But the template I’m using to display them requires images in a landscape aspect ratio. This is a common enough problem. And, of course, we’ve developed a common way of getting around it. You’ll see it on that page. We create a larger version of the image (large enough to fill the width of where the image is displayed), apply some level of Gaussian blur to the image and insert a new copy of the image over that. So we get our original image with a tastefully blurred background which echoes the colour of the image. ChatGPT tells me this is called a “Blurred Fill”.
So that’s all good. But as I’m publishing more books, I need to create these images on a pretty regular basis. And, of course, if I do something more than three or four times, I will want to automate.
A while ago, I wrote a simple program called “blur” that used Imager to apply the correct transformations to an image. But this morning, I decided I should really make that program a bit more useful. And release it to CPAN. So that’s what I’ve been doing.
Adjusting images to fit various aspect ratios without losing essential content or introducing unsightly borders is a frequent challenge. Manually creating a blurred background for each image is time-consuming and inefficient, especially when dealing with multiple images or integrating into automated workflows.
App::BlurFill is a Perl module and CLI tool designed to streamline the process of creating images with blurred backgrounds. It takes an input image and generates a new image where the original is centred over a blurred version of itself, adjusted to the specified dimensions.
Install via CPAN:
cpanm App::BlurFill
Then to use the CLI tool:
blurfill --width=800 --height=600 input.jpg
This command will generate input_blur.jpg with the specified dimensions.
App::BlurFill also includes a web interface built with Dancer2. You can start the web server and send POST requests with an image file to receive the processed image in response.
Example using curl:
curl -OJ -X POST http://localhost:5000/blur -F "image=@input.jpg"
The response will be the new image file, ready for use.
App::BlurFill is written in Perl 5.40, using the new perlclass feature. It makes use of the Imager module for image processing tasks. Currently, it supports JPG, PNG and GIF.
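The underlying technique is straightforward with Imager. Here’s a minimal sketch of a blurred fill, not the module’s actual code, assuming an 800×600 target:

use Imager;

my ($w, $h) = (800, 600);
my $img = Imager->new(file => 'input.jpg') or die Imager->errstr;

# Background: scale so both dimensions cover the box, crop to size, then blur.
my $bg = $img->scale(xpixels => $w, ypixels => $h, type => 'max')
             ->crop(width => $w, height => $h);
$bg->filter(type => 'gaussian', stddev => 10) or die $bg->errstr;

# Foreground: scale so the whole image fits inside the box.
my $fg = $img->scale(xpixels => $w, ypixels => $h, type => 'min');

# Paste the foreground centred over the blurred background and save.
$bg->paste(
    src  => $fg,
    left => int(($w - $fg->getwidth)  / 2),
    top  => int(($h - $fg->getheight) / 2),
);
$bg->write(file => 'input_blur.jpg') or die $bg->errstr;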
Future enhancements may include:
App::BlurFill aims to simplify the task of creating visually consistent images across various platforms and devices. Feedback and contributions are welcome to help improve its functionality and usability.
Please let me know if you find it useful or if there are extra features you would find useful.
Oh, and why not buy some Clapham Technical Press books!
Update: I forgot to include a link to the GitHub repository. It's at https://github.com/davorg-cpan/app-blurfill
The post Reformating images with App::BlurFill first appeared on Perl Hacks.
A few days ago, I looked at an unused domain I owned — balham.org — and thought: “There must be a way to make this useful… and maybe even make it pay for itself.”
So I set myself a challenge: one day to build something genuinely useful. A site that served a real audience (people in and around Balham), that was fun to build, and maybe could be turned into a small revenue stream.
It was also a great excuse to get properly stuck into Jekyll and the Minimal Mistakes theme — both of which I’d dabbled with before, but never used in anger. And, crucially, I wasn’t working alone: I had ChatGPT as a development assistant, sounding board, researcher, and occasional bug-hunter.
Balham is a reasonably affluent, busy part of south west London. It’s full of restaurants, cafés, gyms, independent shops, and people looking for things to do. It also has a surprisingly rich local history — from Victorian grandeur to Blitz-era tragedy.
I figured the site could be structured around three main pillars:
Throw in a curated homepage and maybe a blog later, and I had the bones of a useful site. The kind of thing that people would find via Google or get sent a link to by a friend.
I wanted something static, fast, and easy to deploy. My toolchain ended up being:
The site is 100% static, with no backend, no databases, no CMS. It builds automatically on GitHub push, and is entirely hosted via GitHub Pages.
I gave us about six solid hours to build something real. Here’s what we did (“we” meaning me + ChatGPT):
The domain was already pointed at GitHub Pages, and I had a basic “Hello World” site in place. We cleared that out, set up a fresh Jekyll repo, and added a _config.yml that pointed at the Minimal Mistakes remote theme. No cloning or submodules.
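For reference, the remote-theme setup is only a couple of lines in _config.yml; Minimal Mistakes’ docs also ask for the jekyll-include-cache plugin:

remote_theme: mmistakes/minimal-mistakes
plugins:
  - jekyll-include-cache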
We decided to create four main pages:
Homepage (index.md)
Directory (directory/index.md)
Events (events/index.md)
History (history/index.md)
We used the layout: single layout provided by Minimal Mistakes, and created custom permalinks so URLs were clean and extension-free.
This was built from scratch using a YAML data file (_data/businesses.yml). ChatGPT gathered an initial list of 20 local businesses (restaurants, shops, pubs, etc.), checked their status, and added details like name, category, address, website, and a short description.
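Each entry carries the fields just mentioned. A representative (entirely invented) entry might look like this:

- name: The Balham Bakery
  category: Café
  address: 123 Balham High Road, SW12
  website: https://example.com
  description: A friendly neighbourhood café serving coffee, cakes and fresh bread.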
In the template, we looped over the list, rendered sections with conditional logic (e.g., don’t output the website link if it’s empty), and added anchor IDs to each entry so we could link to them directly from the homepage.
Built exactly the same way, but using _data/events.yml. To keep things realistic, we seeded a small number of example events and included a note inviting people to email us with new submissions.
We wanted the homepage to show a curated set of businesses and events. So we created a third data file, _data/featured.yml, which just listed the names of the featured entries. Then in the homepage template, we used where and slugify to match names and pull in the full record from businesses.yml or events.yml. Super DRY.
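A sketch of how that lookup might go in Liquid, assuming featured.yml holds a simple list of business names under a businesses key (the exact structure is my assumption):

{% for name in site.data.featured.businesses %}
  {% assign biz = site.data.businesses | where: "name", name | first %}
  <a href="/directory/#{{ biz.name | slugify }}">{{ biz.name }}</a>
{% endfor %}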
We added a map of Balham as a hero image, styled responsively. Later we created a .responsive-inline-image class to embed supporting images on the history page without overwhelming the layout.
This turned out to be one of the most satisfying parts. We wrote five paragraphs covering key moments in Balham’s development — Victorian expansion, Du Cane Court, The Priory, the Blitz, and modern growth.
Then we sourced five CC-licensed or public domain images (from Wikimedia Commons and Geograph) to match each paragraph. Each was wrapped in a <figure> with proper attribution and a consistent CSS class. The result feels polished and informative.
We went through all the basics:
title and description in front matter for each page
robots.txt, sitemap.xml, and a hand-crafted humans.txt
Clean URLs without .html extensions
We ran Lighthouse and WAVE tests. Accessibility came out at 100%. Performance dipped slightly due to Google Fonts and image size, but we did our best to optimise without sacrificing aesthetics.
We added a site-wide footer call-to-action inviting people to email us with suggestions for businesses or events. This makes the site feel alive and participatory, even without a backend form.
This started as a fun experiment: could I monetise an unused domain and finally learn Jekyll properly?
What I ended up with is a genuinely useful local resource — one that looks good, loads quickly, and has room to grow.
If you’re sitting on an unused domain, and you’ve got a free day and a chatbot at your side — you might be surprised what you can build.
Oh, and one final thing – obviously you can also get ChatGPT to write a blog post talking about the project :-)
The post Building a website in a day — with help from ChatGPT appeared first on Davblog.
I built and launched a new website yesterday. It wasn’t what I planned to do, but the idea popped into my head while I was drinking my morning coffee on Clapham Common and it seemed to be the kind of thing I could complete in a day – so I decided to put my original plans on hold and built it instead.
The website is aimed at small business owners who think they need a website (or want to update their existing one) but who know next to nothing about web development and can easily fall prey to the many cowboy website companies that seem to dominate the “making websites for small companies” section of our industries. The site is structured around a number of questions you can ask a potential website builder to try and weed out the dodgier elements.
I’m not really in that sector of our industry. But while writing the content for that site, it occurred to me that some people might be interested in the tools I use to build sites like this.
I generally build websites about topics that I’m interested in and, therefore, know a fair bit about. But I probably don’t know everything about these subjects. So I’ll certainly brainstorm some ideas with ChatGPT. And, once I’ve written something, I’ll usually run it through ChatGPT again to proofread it. I consider myself a pretty good writer, but it’s embarrassing how often ChatGPT catches obvious errors.
I’ve used DALL-E (via ChatGPT) for a lot of image generation. This weekend, I subscribed to Midjourney because I heard it was better at generating images that include text. So far, that seems to be accurate.
I don’t write much raw HTML these days. I’ll generally write in Markdown and use a static site generator to turn that into a real website. This weekend I took the easy route and used Jekyll with the Minimal Mistakes theme. Honestly, I don’t love Jekyll, but it integrates well with GitHub Pages and I can usually get it to do what I want – with a combination of help from ChatGPT and reading the source code. I’m (slowly) building my own Static Site Generator (Aphra) in Perl. But, to be honest, I find that when I use it I can easily get distracted by adding new features rather than getting the site built.
As I’ve hinted at, if I’m building a static site (and, it’s surprising how often that’s the case), it will be hosted on GitHub Pages. It’s not really aimed at end-users, but I know how to use it pretty well now. This weekend, I used the default mechanism that regenerates the site (using Jekyll) on every commit. But if I’m using Aphra or a custom site generator, I know I can use GitHub Actions to build and deploy the site.
If I’m writing actual HTML, then I’m old-skool enough to still use Bootstrap for CSS. There’s probably something better out there now, but I haven’t tried to work out what it is (feel free to let me know in the comments).
For a long while, I used jQuery to add Javascript to my pages – until someone was kind enough to tell me that vanilla Javascript had mostly caught up and jQuery was no longer necessary. I understand Javascript. And with help from GitHub Copilot, I can usually get it doing what I want pretty quickly.
Many years ago, I spent a couple of years working in the SEO group at Zoopla. So, now, I can’t think about building a website without considering SEO.
I quickly lose interest in the content side of SEO. Figuring out what my keywords are and making sure they’re scattered through the content at the correct frequency feels like it stifles my writing (maybe that’s an area where ChatGPT can help), but I enjoy technical SEO. So I like to make sure that all of my pages contain the correct structured data (usually JSON-LD). I also like to ensure my sites all have useful OpenGraph headers. This isn’t really SEO, I guess, but these headers control what people see when they share content on social media. By making that as attractive as possible (a useful title and description, an attractive image), you encourage more sharing, which increases your site’s visibility and, in a roundabout way, improves SEO.
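To make that concrete, here’s roughly what those two things look like in a page’s <head> – all of the values are invented for the example:

<!-- OpenGraph headers: control the preview shown when the page is shared -->
<meta property="og:title" content="How to Choose a Web Developer" />
<meta property="og:description" content="Questions to ask before you sign anything." />
<meta property="og:image" content="https://example.com/images/share-card.png" />

<!-- JSON-LD structured data: tells search engines what the page is about -->
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Article",
  "headline": "How to Choose a Web Developer",
  "author": { "@type": "Person", "name": "Jane Doe" }
}
</script>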
I like to register all of my sites with Ahrefs – it crawls them periodically and sends me a long list of SEO improvements I can make.
I add Google Analytics to all of my sites. That’s still the best way to find out how popular your site is and where your traffic is coming from. I used to be quite proficient with Universal Analytics, but I must admit I haven’t fully got the hang of Google Analytics 4 yet—so I’m probably only scratching the surface of what it can do.
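For completeness, adding GA4 to a page is just the standard gtag snippet in the <head>, with G-XXXXXXXXXX standing in for your real measurement ID:

<script async src="https://www.googletagmanager.com/gtag/js?id=G-XXXXXXXXXX"></script>
<script>
  window.dataLayer = window.dataLayer || [];
  function gtag(){dataLayer.push(arguments);}
  gtag('js', new Date());
  gtag('config', 'G-XXXXXXXXXX'); // placeholder measurement ID
</script>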
I also register all of my sites with Google Search Console. That shows me how my sites appear in Google’s search index. I link it to Google Analytics too – so GA also knows what searches brought people to my sites.
I think that covers everything—though I’ve probably forgotten something. It might sound like a lot, but once you get into a rhythm, adding these extra touches doesn’t take long. And the additional insights you gain make it well worth the effort.
If you’ve built a website recently, I’d love to hear about your approach. What tools and techniques do you swear by? Are there any must-have features or best practices I’ve overlooked? Drop a comment below or get in touch—I’m always keen to learn new tricks and refine my process. And if you’re a small business owner looking for guidance on choosing a web developer, check out my new site—it might just save you from a costly mistake!
Originally published at https://blog.dave.org.uk on March 16, 2025.
The post How I build websites in 2025 appeared first on Davblog.
I’ve been a member of Picturehouse Cinemas for something approaching twenty years. It costs about £60 a year and for that, you get five free tickets and discounts on your tickets and snacks. I’ve often wondered whether it’s worth paying for, but in the last couple of years, they’ve added an extra feature that makes it well worth the cost. It’s called Film Club and every week they have two curated screenings that members can see for just £1. On Sunday lunchtime, there’s a screening of an older film, and on a weekday evening (usually Wednesday at the Clapham Picturehouse), they show something new. I’ve got into the habit of seeing most of these screenings.
For most of the year, I’ve been considering a monthly post about the films I’ve seen at Film Club, but I’ve never got around to it. So, instead, you get an end-of-year dump of the almost eighty films I’ve seen.
The post Picturehouse Film Club appeared first on Davblog.
Royal titles in the United Kingdom carry a rich tapestry of history, embodying centuries of tradition while adapting to the changing landscape of the modern world. This article delves into the structure of these titles, focusing on significant changes made during the 20th and 21st centuries, and how these rules affect current royals.
The framework for today’s royal titles was significantly shaped by the Letters Patent issued by King George V in 1917. This document was pivotal in redefining who in the royal family would be styled with “His or Her Royal Highness” (HRH) and as a prince or princess. Specifically, the 1917 Letters Patent restricted these styles to:
- the children of a sovereign;
- the children of the sons of a sovereign; and
- the eldest living son of the eldest son of the Prince of Wales.
This move was partly in response to the anti-German sentiment of World War I, aiming to streamline the monarchy and solidify its British identity by reducing the number of royals with German titles.
Notice that the definitions talk about “a sovereign”, not “the sovereign”. This means that when the sovereign changes, no-one will lose their royal title (for example, Prince Andrew is still the son of a sovereign, even though he is no longer the son of the sovereign). However, people can gain royal titles when the sovereign changes — we will see examples below.
Understanding the implications of the existing rules as his family grew, King George VI issued a new Letters Patent in 1948 to extend the style of HRH and prince/princess to the children of the future queen, Princess Elizabeth (later Queen Elizabeth II). This was crucial as, without this adjustment, Princess Elizabeth’s children would not automatically have become princes or princesses because they were not male-line grandchildren of the monarch. This ensured that Charles and Anne were born with princely status, despite being the female-line grandchildren of a monarch.
Queen Elizabeth II’s update to the royal titles in 2012 before the birth of Prince William’s children was another significant modification. The Letters Patent of 2012 decreed that all the children of the eldest son of the Prince of Wales would hold the title of HRH and be styled as prince or princess, not just the eldest son. This move was in anticipation of changes brought about by the Succession to the Crown Act of 2013, which ended the system of male primogeniture, ensuring that the firstborn child of the Prince of Wales, regardless of gender, would be the direct heir to the throne. Without this change, there could have been a situation where Prince William’s first child (and the heir to the throne) was a daughter who wasn’t a princess, whereas her eldest (but younger) brother would have been a prince.
As the royal family branches out, descendants become more distant from the throne and lose their entitlement to HRH and princely status. For example, the Duke of Gloucester, the Duke of Kent, Prince Michael of Kent and Princess Alexandra all have princely status as male-line grandchildren of George V. Their children are all great-grandchildren of a monarch and, therefore, do not have royal styles or titles. This reflects a natural trimming of the royal family tree, focusing the monarchy’s public role on those directly in line for succession.
The evolution of British royal titles reflects both adherence to deep-rooted traditions and responsiveness to modern expectations. These titles not only delineate the structure and hierarchy within the royal family but also adapt to changes in societal norms and the legal landscape, ensuring the British monarchy remains both respected and relevant in the contemporary era.
Originally published at https://blog.lineofsuccession.co.uk on April 25, 2024.
Royal Titles Decoded: What Makes a Prince or Princess? was originally published in Line of Succession on Medium.
Changing rooms are the same all over the galaxy and this one really played to the stereotype. The lights flickered that little bit more than you’d want them to, a sizeable proportion of the lockers wouldn’t lock and the whole room needed a good clean. It didn’t fit with the eye-watering amount of money we had all paid for the tour.
There were a dozen or so of us changing from our normal clothes into outfits that had been supplied by the tour company — outfits that were supposed to render us invisible when we reached our destination. Not invisible in the “bending light rays around you” way, they would just make us look enough like the local inhabitants that no-one would give us a second glance.
Appropriate changing room etiquette was followed. Everyone was either looking at the floor or into their locker to avoid eye contact with anyone else. People talked in lowered voices to people they had come with. People who, like me, had come alone were silent. I picked up on some of the quiet conversations — they were about the unusual flora and fauna of our location and the unique event we were here to see.
Soon, we had all changed and were ushered into a briefing room where our guide told us many things we already knew. She had slides explaining the physics behind the phenomenon and was at great pains to emphasise the uniqueness of the event. No other planet in the galaxy had been found that met all of the conditions for what we were going to see. She went through the history of tourism to this planet — decades of uncontrolled visits followed by the licensing of a small number of carefully vetted companies like the one we were travelling with.
She then turned to more practical matters. She reiterated that our outfits would allow us to pass for locals, but that we should do all we could to avoid any interactions with the natives. She also reminded us that we should only look at the event through the equipment that we would be issued with on our way down to the planet.
Through a window in the briefing room a planet, our destination, hung in space. Beyond the planet, its star could also be seen.
An hour or so later, we were on the surface of the planet. We were deposited at the top of a grassy hill on the edge of a large crowd of the planet’s inhabitants. Most of us were of the same basic body shape as the quadruped locals and, at first glance at least, passed for them. A few of us were less lucky and had to stay in the vehicles to avoid suspicion.
The timing of the event was well understood and the company had dropped us off early enough that we were able to find a good viewing spot but late enough that we didn’t have long to wait. We had been milling around for half an hour or so when a palpable moment of excitement passed through the crowd and everyone looked to the sky.
Holding the equipment I had been given to my eyes I could see what everyone else had noticed. A small bite seemed to have been taken from the bottom left of the planet’s sun. As we watched, the bite got larger and larger as the planet’s satellite moved in front of the star. The satellite appeared to be a perfect circle, but at the last minute — just before it covered the star completely — it became obvious that the edge wasn’t smooth as gaps between irregularities on the surface (mountains, I suppose) allowed just a few points of light through.
And then the satellite covered the sun and the atmosphere changed completely. The world turned dark and all conversations stopped. All of the local animals went silent. It was magical.
My mind went back to the slides explaining the phenomenon. Obviously, the planet’s satellite and star weren’t the same size, but their distance from the planet exactly balanced their difference in size so they appeared the same size in the sky. And the complex interplay of orbits meant that on rare occasions like this, the satellite would completely and exactly cover the star.
That was what we were there for. This was what was unique about this planet. No other planet in the galaxy had a star and a satellite that appeared exactly the same size in the sky. This is what made the planet the most popular tourist spot in the galaxy.
Ten minutes later, it was over. The satellite continued on its path and the star was gradually uncovered. Our guide bundled us into the transport and back up to our spaceship.
Before leaving the vicinity of the planet, our pilot found three locations in space where the satellite and the star lined up in the same way and created fake eclipses for those of us who had missed taking photos of the real one.
Originally published at https://blog.dave.org.uk on April 7, 2024.
The post The Tourist appeared first on Davblog.
I really thought that 2023 would be the year I got back into the swing of seeing gigs. But somehow I ended up seeing even fewer than I did in 2022 – just 12, down from 16 the previous year. Sometimes, I look at Martin’s monthly gig round-ups and wonder what I’m doing with my life!
I normally list my ten favourite gigs of the year, but it would seem rude to leave just two gigs off the list, so here are all twelve gigs I saw this year – in, as always, chronological order.
So, what’s going to happen in 2024? I wonder if I’ll get back into the habit of going to more shows. I only have a ticket for one gig next year – They Might Be Giants playing Flood in November (a show that was postponed from this year). I guess we’ll see. Tune in this time next year to see what happened.
The post 2023 in Gigs appeared first on Davblog.