At the end of my last post, we had a structure in place that used GitHub Actions to run a workflow every time a change was committed to the PPC repository. That workflow would rebuild the website and publish it on GitHub Pages.
All that was left for us to do was to write the middle bit – the part that actually takes the contents of the repo and creates the website. This involves writing some Perl.
There are three types of pages that we want to create: a page for each of the PPCs, an index page listing all of the PPCs, and pages for the other documents that describe the PPC process.
I’ll be using the Template Toolkit to build the site, with a sprinkling of Bootstrap to make it look half-decent. Because there is a lot of Markdown-to-HTML conversion, I’ll use my Template::Provider::Pandoc module which uses Pandoc to convert templates into different formats.
The first thing I did was parse the PPCs themselves, extracting the relevant information. Luckily, each PPC has a “preamble” section containing most of the data we need. I created a basic class to model PPCs, which included a really hacky parser to extract this information and create an object of the class.
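To give a flavour of what that class does, here’s a rough sketch of the kind of thing involved. This isn’t the actual PPC.pm from the repo, and the preamble keys it looks for are assumptions – it just illustrates the general “read the preamble, bless a hash” approach.

package PPC;

# A rough sketch only - not the real PPC.pm. The preamble keys (title,
# author, status, etc.) are assumptions for illustration.
use v5.38;

sub new_from_file ($class, $file) {
    open my $fh, '<', $file or die "Cannot open $file: $!";

    my %data = ( in_path => $file );

    # Hacky parser: grab "Key: value" lines until the first blank line.
    while (my $line = <$fh>) {
        last if $line =~ /^\s*$/;
        $data{ lc $1 } = $2 if $line =~ /^(\w+)\s*:\s*(.+?)\s*$/;
    }

    # Work out where the generated HTML should go.
    ($data{out_path} = $file) =~ s|\.md$|.html|;

    return bless \%data, $class;
}

# Simple accessors for the bits the build script uses.
sub in_path  { $_[0]{in_path} }
sub out_path { $_[0]{out_path} }
sub as_data  { return { %{ $_[0] } } }

1;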
This class abstracts away a lot of the complexity, which means the program that actually builds the site is less than eighty lines of code. Let’s look at it in a bit more detail:
#!/usr/bin/perl
use v5.38;
use JSON;
use File::Copy;
use Template;
use Template::Provider::Pandoc;
use PPC;
There’s nothing unusual in the first few lines. We’re just loading the modules we’re using. Note that use v5.38 automatically enables strict and warnings, so we don’t need to load them explicitly.
my @ppcs;
my $outpath = './web';
my $template_path = ['./ppcs', './docs', './in', './ttlib'];
Here, we’re just setting up some useful variables. @ppcs will contain the PPC objects that we create. One potential clean-up here is to reduce the size of that list of input directories.
my $base = shift || $outpath;
$base =~ s/^\.//;
$base = "/$base" if $base !~ m|^/|;
$base = "$base/" if $base !~ m|/$|;
This is a slightly messy hack that is used to set a <base> tag in the HTML.
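If it’s not obvious what those substitutions achieve, here’s a tiny stand-alone illustration. The sample values other than './web' are made up:

#!/usr/bin/perl
use v5.38;

# Stand-alone illustration of the base-path clean-up above.
for my $base ('./web', 'PPCs', '/PPCs/') {
    my $fixed = $base;
    $fixed =~ s/^\.//;
    $fixed = "/$fixed" if $fixed !~ m|^/|;
    $fixed = "$fixed/" if $fixed !~ m|/$|;
    say "'$base' becomes '$fixed'";
}

# Output:
#   './web' becomes '/web/'
#   'PPCs' becomes '/PPCs/'
#   '/PPCs/' becomes '/PPCs/'

Anyway, back to the real build program.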
my $provider = Template::Provider::Pandoc->new({
INCLUDE_PATH => $template_path,
});
my $tt = Template->new({
LOAD_TEMPLATES => [$provider],
INCLUDE_PATH => $template_path,
OUTPUT_PATH => $outpath,
RELATIVE => 1,
WRAPPER => 'page.tt',
VARIABLES => {
base => $base,
}
});
Here, we’re setting up our Template Toolkit processor. Some of you may not be familiar with using a Template provider module. These modules change how TT retrieves templates: if the template has an .md
extension, then the text is passed through Pandoc to convert it from Markdown to HTML before it’s handed to the template processor. It’s slightly annoying that we need to pass the template include path to both the provider and the main template engine.
for (<ppcs/*.md>) {
my $ppc = PPC->new_from_file($_);
push @ppcs, $ppc;
$tt->process($ppc->in_path, {}, $ppc->out_path)
or warn $tt->error;
}
This is where we process the actual PPCs. For each PPC we find in the /ppcs
directory, we create a PPC object, store that in the @ppcs
variable and process the PPC document as a template – converting it from Markdown to HTML and writing it to the /web
directory.
my $vars = {
ppcs => \@ppcs,
};
$tt->process('index.tt', $vars, 'index.html')
or die $tt->error;
Here’s where we process the index.tt
file to generate the index.html
for our site. Most of the template is made up of a loop over the @ppcs
variable to create a table of the PPCs.
for (<docs/*.md>) {
s|^docs/||;
my $out = s|\.md|/index.html|r;
$tt->process($_, {}, $out)
or die $tt->error;
}
There are a few other documents in the /docs
directory describing the PPC process. So in this step, we iterate across the Markdown files in that directory and convert each of them into HTML. Unfortunately, one of them is template.md,
which is intended to be used as the template for new PPCs – so it would be handy if that one wasn’t converted to HTML. That’s something to think about in the future.
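For what it’s worth, one possible fix (just a suggestion – it isn’t in the current build script) would be to skip that file inside the loop:

for (<docs/*.md>) {
    s|^docs/||;

    # Don't publish the blank PPC template as an HTML page.
    next if $_ eq 'template.md';

    my $out = s|\.md|/index.html|r;
    $tt->process($_, {}, $out)
        or die $tt->error;
}

For now, though, the build just converts everything it finds in /docs.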
mkdir 'web/images';
for (<images/*>) {
copy $_, "web/$_";
}
if (-f 'in/style.css') {
copy 'in/style.css', 'web/style.css';
}
if (-f 'CNAME') {
copy 'CNAME', "web/CNAME";
}
We’re on the home straight now. And this section is a bit scrappy. You might recall from the last post that we’re building the website in the /web
directory. And there are a few other files that need to be copied into that directory so that they get deployed to the web server. So we just copy files. You might not know what a CNAME
file is – it’s the file that GitHub Pages uses to tell their web server that you’re serving your website from a custom domain name.
my $json = JSON->new->pretty->canonical->encode([
map { $_->as_data } @ppcs
]);
open my $json_fh, '>', 'web/ppcs.json' or die $!;
print $json_fh $json;
And, finally, we generate a JSON version of our PPCs and write that file to the /web
directory. No-one asked for this, but I thought someone might find this data useful. If you use this for something interesting, I’d love to hear about it.
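If you do want to play with that data, here’s a minimal sketch of how you might consume it. A couple of assumptions: that the file ends up at the root of the published site (I’ve used the preview URL mentioned in the previous post), and that each record contains fields like “title” and “status” – what’s actually there depends entirely on what PPC->as_data returns.

#!/usr/bin/perl
use v5.38;
use HTTP::Tiny;
use JSON::PP;

# Assumed location of the JSON - adjust to wherever the site is deployed.
my $url = 'https://davorg.dev/PPCs/ppcs.json';

my $res = HTTP::Tiny->new->get($url);
die "Failed to fetch $url: $res->{status} $res->{reason}\n"
    unless $res->{success};

my $ppcs = decode_json($res->{content});

say scalar(@$ppcs), ' PPCs found';
for my $ppc (@$ppcs) {
    # The field names here are guesses - inspect the JSON to see what's there.
    say join ' - ', grep { defined } @{$ppc}{qw(title status)};
}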
A few other bits and pieces to be aware of. One concerns the other documents in /docs: it would make sense to change things so that more of the site is generated automatically from the contents of that directory.

But there you are. That’s the system that I knocked together in a few hours a couple of weeks ago. As I mentioned in the last post, the idea was to make the PPC process more transparent to the Perl community outside of the Perl 5 Porters and the Perl Steering Council. I hope it achieves that and, further, I hope it does so in a way that keeps out of people’s way. As soon as someone updates one of the documents in the repository, the workflow will kick in and publish a new version of the website. There are a few grungy corners of the code and there are certainly some improvements that can be made. I’m hoping that once the pull request is merged, people will start proposing new pull requests to add new features.
The post Proposed Perl Changes (part 2) appeared first on Perl Hacks.
Many thanks to Dave Cross for providing an initial implementation of a PPC index page.
Maybe I should explain that in a little more detail. There’s a lot of detail, so it will take a couple of blog posts.
About two weeks ago, I got a message on Slack from Philippe Bruhat, a member of the Perl Steering Council. He asked if I would have time to look into building a simple static site based on the GitHub repo that stores the PPCs that are driving a lot of Perl’s development. The PSC thought that reading these important documents on a GitHub page wasn’t a great user experience and that turning it into a website might lead to more people reading the proposals and, hence, getting involved in discussions about them.
I guess they thought of me because I’ve written a bit about GitHub Pages and GitHub Actions over the last few years, and these were exactly the technologies that would be useful in this project. In fact, I had already created a website that fulfills a similar role for the PSC meeting minutes – and I know they know about that site because they’ve been maintaining it themselves for several months.
I was about to start working with a new client, but I had a spare day, so I said I’d be happy to help. And the following day, I set to work.
Reviewing the situation
I started by looking at what was in the repo.
The documents in the repo were all in Markdown format, and the PPCs themselves seemed to follow a pretty standardised format.
Setting a target
Next, I listed what would be essential parts of the new site.
This is exactly the kind of use case that a combination of GitHub Pages and GitHub Actions is perfect for. Perhaps it’s worth briefly describing what those two GitHub features are.
Introducing GitHub Pages
GitHub Pages is a way to run a website from a GitHub repo. The feature was initially introduced to make it easy to run a project website alongside your GitHub repo – with the files that make up the website being stored in the same repo as the rest of your code. But, as often happens with useful features, people have been using the feature for all sorts of websites. The only real restriction is that it only supports static sites – you cannot use GitHub’s servers to run any kind of back-end processing.
The simplest way to run a GitHub Pages website is to construct it manually: put the HTML, CSS and other files into a directory inside your repo called /docs, commit those files and go to the “Settings -> Pages” settings for your repo to turn on Pages for the repo. Within minutes your site will appear at the address USERNAME.github.io/REPONAME. Almost no-one uses that approach.
The most common approach is to use a static site builder to build your website. The most popular is Jekyll – which is baked into the GitHub Pages build/deploy cycle. You edit Markdown files and some config files. Then each time you commit a change to the repo, GitHub will automatically run Jekyll over your input files, generate your website and deploy that to its web servers. We’re not going to do that.
We’ll use the approach I’ve used for many GitHub Pages sites. We’ll use GitHub Actions to do the equivalent of the “running Jekyll over your input files to generate your website” step. This gives us more flexibility and, in particular, allows us to generate the website using Perl.
Introducing GitHub Actions
GitHub Actions is another feature that was introduced with one use case in mind but which has expanded to be used for an incredible range of ideas. It was originally intended for CI/CD – a replacement for systems like Jenkins or Travis CI – but that only accounts for about half of the things I use it for.
A GitHub Actions run starts in response to various triggers. You can then run pretty much any code you want on a virtual machine, generating useful reports, updating databases, releasing code or (as in this case) generating a website.
GitHub Actions is a huge subject (luckily, there’s a book!). We’re only going to touch on one particular way of using it. Our workflow will be: whenever a change is pushed to the repo (or the workflow is triggered manually), build the website using Perl inside a container, then deploy the generated files to the GitHub Pages web server.
Making a start
Let’s make a start on creating a GitHub Actions workflow to deal with this. Workflows are defined in YAML files that live in the .github/workflows directory in our repo. So I created the relevant directories and a file called buildsite.yml.
There will be various sections in this file. We’ll start simply by defining a name for this workflow:
name: Generate website
The next section tells GitHub when to trigger this workflow. We want to run it when a commit is pushed to the “main” branch. We’ll also add the “workflow_dispatch” trigger, which allows us to manually trigger the workflow – it adds a button to the workflow’s page inside the repo:
on:
push:
branches: 'main'
workflow_dispatch:
The main part of the workflow definition is the next section – the one that defines the jobs and the individual steps within them. The start of that section looks like this:
jobs:
build:
runs-on: ubuntu-latest
container: perl:latest
steps:
- name: Perl version
run: perl -v
- name: Checkout
uses: actions/checkout@v4
The “build” there is the name of the first job. You can name jobs anything you like – well, anything that can be the name of a valid YAML key. We then define the working environment for this job – we’re using an Ubuntu virtual machine and, on that, we’re going to download and run the latest Perl container from the Docker Hub.
The first step isn’t strictly necessary, but I like to have a simple but useful step to ensure that everything is working. This one just prints the Perl version to the workflow log. The second step is one you’ll see in just about every GitHub Actions workflow. It uses a standard, prepackaged library (called an “action”) to clone the repo to the container.
The rest of this job will make much more sense once I’ve described the actual build process in my next post. But here it is for completeness:
- name: Install pandoc and cpanm
run: apt-get update && apt-get install -y pandoc cpanminus
- name: Install modules
run: |
cpanm --installdeps --notest .
- name: Get repo name into environment
run: |
echo "REPO_NAME=${GITHUB_REPOSITORY#$GITHUB_REPOSITORY_OWNER/}" >> $GITHUB_ENV
- name: Create pages
env:
PERL5LIB: lib
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
run: |
mkdir -p web
perl bin/build $REPO_NAME
- name: Update pages artifact
uses: actions/upload-pages-artifact@v3
with:
path: web/
Most of the magic (and all of the Perl – for those of you who were wondering) happens in the “Create pages” step. If you can’t wait until the next post, you can find the build program and the class it uses in the repo.
But for now, let’s skim over that and look at the final step in this job. That uses another pre-packaged action to build an artifact (which is just a tarball) which the next job will deploy to the GitHub Pages web server. You can pass it the name of a directory and it will build the artifact from that directory. So you can see that we’ll be building the web pages in the web/ directory.
The second (and final) job is the one that actually carries out the deployment. It looks like this:
deploy:
needs: build
permissions:
pages: write
id-token: write
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
It uses another standard, pre-packaged action and most of the code here is configuration. One interesting line is the “needs” key. That tells the workflow engine that the “build” job needs to have completed successfully before this job can be run.
But once it has run, the contents of our web/ directory will be on the GitHub Pages web server and available for our adoring public to read.
All that is left is for us to write the steps that will generate the website. And that is what we’ll be covering in my next post.
Oh, and if you want to preview the site itself, it’s at https://davorg.dev/PPCs/ and there’s an active pull request to merge it into the main repo.
The post Proposed Perl Changes appeared first on Perl Hacks.
If you have a website, then it’s very likely that you would like as many people as possible to see it. One of the best tools for achieving that is to ensure that your site is returned close to the top of as many search results pages as possible.
In order to do that, you really have two targets: making sure the search engines understand what your site is about, and getting other sites to link to yours.
The second item on the list is mostly about getting other websites on the same topic to link to you – and it is outside the scope of this post. In this post, I want to talk about a good way to ensure search engines know what your site is about.
Of course, the search engines have invested a lot of money in working that out for themselves. They scan the text on your site and process it to extract the meaning. But there are various ways you can make it easier for them. And they like sites that make their lives easier.
One of the most powerful ways to achieve this is to add structured data to your site. That means adding extra mark-up to your web pages which explains what the page is about. On the Schema.org website, you can find dozens of “things” that you can describe in structured data – for example, here is the definition of the Person entity. Each entity has a number of (largely optional) properties which can be included in structured data about an object of that type. Each property can be a string or another structured data entity. Additionally, entities are arranged in a hierarchy, so one entity can be based on another, more generic, entity. A Person, for example, inherits all of the properties of a Thing (which is the most generic type of entity). This is a lot like inheritance in Object-Oriented Programming.
Perhaps most usefully, the definition of each entity type ends with some examples of how structured data about an entity of that type could be added to an HTML document. The examples cover three formats: Microdata, RDFa and JSON-LD.
Because it is completely separate from the existing mark-up, I find JSON-LD easier to work with than the other two formats. And for that reason, I wrote MooX::Role::JSON_LD, which makes it easy to generate JSON-LD for classes that are based on Moo or Moose. Let’s look at a simple example of using it to add Person JSON-LD to a web page about a person. We’ll assume we already have a Person class that we use to provide the data on that page. It has attributes first_name, last_name and birth_date.
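For concreteness, such a class might look something like this – a minimal Moo sketch, nothing more; your real class will almost certainly have more going on:

package Person;
use Moo;

# The three attributes mentioned above.
has first_name => (is => 'ro', required => 1);
has last_name  => (is => 'ro', required => 1);
has birth_date => (is => 'ro', required => 1);

1;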
We start with some configuration. We load the role and define two subroutines which tell us which entity type we’re working with and which attributes we want to include in the JSON-LD. The code might look like this:
with 'MooX::Role::JSON_LD';
sub json_ld_type { 'Person' };
sub json_ld_fields { [ qw[ first_name last_name birth_date ] ] };
We can now use our Person class like this:
use Person;
my $bowie = Person->new({
first_name => 'David',
last_name => 'Bowie',
birth_date => '1947-01-08',
});
say $bowie->json_ld;
This produces the following output:
{
"@context" : "http://schema.org/",
"@type" : "Person",
"first_name" : "David",
"last_name" : "Bowie",
"birth_date" : "1947-01-08"
}
This looks pretty good. But, sadly, it’s not valid JSON-LD. In the Schema.org Person entity, the relevant properties are called “givenName”, “familyName” and “birthDate”. Obviously, if we were designing our class from scratch, we could create attributes with those names. But often we’re adding features to existing systems and we don’t have that luxury. So the role allows us to change the names of attributes before they appear in the JSON-LD.

We need to look more closely at the json_ld_fields() subroutine. It defines the names of the attributes that will appear in the JSON-LD. It returns an array reference and each element of the array is usually a string which is the name of an attribute. But an element can also be a hash reference. In that case, the key of the hash is the name of the property we want to appear in the JSON-LD and the value is the name of the matching attribute in our class. So we can redefine our subroutine to look like this:
sub json_ld_fields {
[
{ givenName => 'first_name' },
{ familyName => 'last_name' },
{ birthDate => 'birth_date' },
]
}
And now we get the following JSON-LD:
{
"@context" : "http://schema.org/",
"@type" : "Person",
"givenName" : "David",
"familyName" : "Bowie",
"birthDate" : "1947-01-08"
}
Which is now valid.
There’s one other trick we can use. We’ve seen that the Schema.org Person entity has “givenName” and “familyName” properties which map directly onto our “first_name” and “last_name” attributes. But the Person entity inherits from the Thing entity and that has a property called “name” which might be more useful for us. So perhaps we want to combine the “first_name” and “last_name” attributes into that single JSON-LD property. We can do that by changing our json_ld_fields() subroutine again:
sub json_ld_fields {
[
{ birthDate => 'birth_date'},
{ name => sub { $_[0]->first_name . ' ' . $_[0]->last_name} },
]
}
In this version, we’ve added the “name” as the key of a hashref and the value is an anonymous subroutine that is passed the object and returns the name by concatenating the first and last names separated by a space. We now get this JSON-LD:
{
"@context" : "http://schema.org/",
"@type" : "Person",
"birthDate" : "1947-01-08",
"name" : "David Bowie"
}
Using this approach allows us to build arbitrary JSON-LD properties from a combination of our object’s attributes.
Let’s look at a real-world example (and the reason why I was reminded of this module’s existence earlier this week).
I have a website called ReadABooker. It’s about the books that compete for the Booker Prize. Each year, a shortlist of six novels is announced and, later in the year, a winner is chosen. The winning author gets £50,000 and all of the shortlisted novels get massively increased sales. It’s a big deal in British literary circles. I created the website a few years ago. It lists all of the events (the competition goes back to 1969) and for each year, it lists all of the shortlisted novels. You can also see all of the authors who have been shortlisted and which of their shortlisted novels have won the prize. Each novel has a “Buy on Amazon” button and that link includes my associate ID – so, yes, it’s basically an attempt to make money out of people who want to buy Booker shortlisted novels.
But it’s not working. It’s not working because not enough people know about the site. So last week I decided to do a bit of SEO work on the site. And the obvious improvement was to add JSON-LD for the book and author pages.
The site itself is fully static. It gets updated twice a year – once when the shortlist is announced and then again when the winner is announced (the second update is literally setting a flag on a database row). The data about the novels is stored in an SQLite database. And there are DBIx::Class classes that allow me to access that data. So the obvious place to add the JSON-LD code is in Booker::Schema::Result::Book and Booker::Schema::Result::Person (a person can exist in the database if they have been an author, a judge or both).
The changes for the Person class were trivial. I don’t actually hold much information about the people in the database.
with 'MooX::Role::JSON_LD';
sub json_ld_type { 'Person' }
sub json_ld_fields {
[
qw/name/,
];
}
The changes in the Book class have one interesting piece of code:
with 'MooX::Role::JSON_LD';
sub json_ld_type { 'Book' }
sub json_ld_fields {
[
{ name => 'title' },
{ author => sub {
$_[0]->author->json_ld_data }
},
{ isbn => 'asin' },
];
}
The link between a book and its author is obviously important. But in the database, that link is simply represented by a foreign key in the book table. Having something like “author : 23” in the JSON-LD would be really unhelpful, so we take advantage of the link between the book and the author that DBIx::Class has given us and call the json_ld_data() method on the book’s author object. This method (which is added to any class that uses the role) returns the raw data structure which is later passed to a JSON encoder to produce the JSON-LD. So by calling that method inside the anonymous subroutine that creates the “author” attribute we can reuse that data in our book data.
The Person class creates JSON-LD like this:
{
"@context" : "http://schema.org/",
"@type" : "Person",
"name" : "Theresa Mary Anne Smith"
}
And the Book class creates JSON-LD like this:
{
"@context" : "http://schema.org/",
"@type" : "Book",
"author" : {
"@context" : "http://schema.org/",
"@type" : "Person",
"name" : "Theresa Mary Anne Smith"
},
"isbn" : "B086PB2X8F",
"name" : "Office Novice"
}
There were two more changes needed. We needed to get the JSON-LD actually onto the HTML pages. The site is created using the Template Toolkit and the specific templates are author.html.tt and title.html.tt. Adding the JSON-LD to these pages was as simple as adding one line to each template:
[% author.json_ld_wrapped -%]
And
[% book.json_ld_wrapped -%]
We haven’t mentioned the json_ld_wrapped() method yet. The role actually adds three main methods to a class, and they build on each other: json_ld_data() returns the raw Perl data structure describing the object (as we saw when nesting the author inside the book), json_ld() passes that data structure through a JSON encoder to produce the JSON-LD text, and json_ld_wrapped() wraps that JSON-LD in the <script type="application/ld+json"> element that can be dropped straight into an HTML page – which is why it’s the one we call from the templates.
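As a quick illustration, reusing the hypothetical David Bowie object from earlier in this post, calling the wrapped version should produce output along these lines:

# Continuing with the $bowie object built earlier in this post.
say $bowie->json_ld_wrapped;

# ...which should print something like:
#
# <script type="application/ld+json">
# {
#    "@context" : "http://schema.org/",
#    "@type" : "Person",
#    ...
# }
# </script>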
And that’s how I added JSON-LD to my website pretty easily. I now need to wait and see just how effective these changes will be. Hopefully thousands of people will be buying books through my site in the coming weeks and I can sit back and stop having to write code for a living.
It’s the dream!
How about you? Which of your websites would benefit from the addition of a few carefully-crafted pieces of JSON-LD?
The post Adding structured data with Perl appeared first on Perl Hacks.
The London Perl Mongers have had a website for a very long time. Since some time in 1998, I think. At first, I hosted a static site for us. Later on, we bought our own server and hosted it at a friendly company around Silicon Roundabout. But for most of the lifetime of the organisation, it’s been hosted on a server donated to us by Exonetric (for which we are extremely grateful).
But all good things come to an end. And last week, we got an email saying that Exonetric was closing down and we would need to find alternative hosting by the end of February.
The code for the site is on GitHub, so I had a quick look at it to see if there was anything easy we could do.
I was slightly surprised to find it was a PSGI application. Albeit a really simple PSGI application that basically served content from a /root directory, having passed it through some light Template Toolkit processing first. Converting this to a simple static site that could be hosted on GitHub Pages was going to be simple.
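For anyone who hasn’t seen that kind of thing before, here’s a rough sketch of what such an app might look like. To be clear, this isn’t the actual London.pm code – just an illustration of the “serve files from /root via Template Toolkit” idea:

#!/usr/bin/perl
# app.psgi - a minimal sketch, NOT the real London.pm application.
use strict;
use warnings;
use Plack::Request;
use Template;

my $tt = Template->new({ INCLUDE_PATH => 'root' });

my $app = sub {
    my $req = Plack::Request->new(shift);

    # Map the requested path to a file under root/.
    my $path = $req->path_info;
    $path = '/index.html' if $path eq '/';
    $path =~ s|^/||;

    my $out;
    if ($tt->process($path, {}, \$out)) {
        return [ 200, [ 'Content-Type' => 'text/html' ], [ $out ] ];
    }

    return [ 404, [ 'Content-Type' => 'text/plain' ], [ 'Not found' ] ];
};

$app;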
Really, all it needed was a ttree configuration file that reads all of the files from /root, processes them and writes the output to /docs. The configuration file I created looked like this:
src = root
dest = docs
copy = \.(gif|png|jpg|pdf|css|js)$
copy = ^CNAME$
recurse
verbose
To be honest, most of the static web site work I do these days uses a static site builder that’s rather more complex than that, so it was really refreshing to remind myself that you can do useful things with tools as simple as ttree.
The next step was to add a GitHub Actions workflow that publishes the site to the GitHub Pages server each time something changes. That’s all pretty standard stuff too:
name: Generate web page
on:
push:
branches: 'master'
workflow_dispatch:
jobs:
build:
if: github.repository_owner == 'LondonPM'
runs-on: ubuntu-latest
steps:
- name: Install TT
run: |
sudo apt-get update
sudo apt-get -y install libtemplate-perl
- name: Checkout
uses: actions/checkout@v4
- name: Create pages
run: ttree -f ttreerc > ttree.log 2>&1
- name: Archive ttree logs
uses: actions/upload-artifact@v4
with:
name: ttree.log
path: ./ttree.log
retention-days: 3
- name: Update pages artifact
uses: actions/upload-pages-artifact@v3
with:
path: docs/
deploy:
needs: build
if: github.repository_owner == 'LondonPM'
permissions:
pages: write
id-token: write
environment:
name: github-pages
url: ${{ steps.deployment.outputs.page_url }}
runs-on: ubuntu-latest
steps:
- name: Deploy to GitHub Pages
id: deployment
uses: actions/deploy-pages@v4
The only slightly complex lines here are the two lines that say if: github.repository_owner == 'LondonPM'. We’re hoping that other people will fork this repo in order to work on the site, but it’s only the main fork that should attempt to publish the current version on the GitHub Pages servers.
There was a bit of fiddling with DNS. Temporarily, we used the domain londonperl.com as a test deployment (because I’m the kind of person who just happens to have potentially useful domains lying around, unused!) but enough of us are slightly obsessed about using the correct TLD so we’ve settled on londonperl.org[*]. We’ve asked the nice people at the Perl NOC to redirect our old domain to the new one.
And it’s all working (well, with the exception of the redirection of the old domain). Thanks to Sue, Lee and Leo for the work they’ve done in the last few days to get it all working. And a big thanks to Mark and Exonetric for hosting the site for us for the last couple of decades.
These changes are already having the desired effect. People are submitting pull requests to update the website. Our website is probably more up-to-date than it has been for far too long. It’s even responsive now.
I realise there has been very little Perl in this post. But I thought it might be useful for other Perl Mongers groups who are looking for a simple (and free!) space to host their websites. Please let me know if you have any questions about the process.
[*] We wanted to use Cloudflare to manage the domain but their free service only supports top-level domains and london.pm.org (our original domain) is a subdomain – and none of us wanted to pay for the enterprise version.
The post London Perl Mongers on GitHub Pages appeared first on Perl Hacks.
I’ve been a member of Picturehouse Cinemas for something approaching twenty years. It costs about £60 a year and for that, you get five free tickets and discounts on your tickets and snacks. I’ve often wondered whether it’s worth paying for, but in the last couple of years, they’ve added an extra feature that makes it well worth the cost. It’s called Film Club and every week they have two curated screenings that members can see for just £1. On Sunday lunchtime, there’s a screening of an older film, and on a weekday evening (usually Wednesday at the Clapham Picturehouse), they show something new. I’ve got into the habit of seeing most of these screenings.
For most of the year, I’ve been considering a monthly post about the films I’ve seen at Film Club, but I’ve never got around to it. So, instead, you get an end-of-year dump of the almost eighty films I’ve seen.
The post Picturehouse Film Club appeared first on Davblog.
When I first wrote about my pointless personal side projects a few months ago, I used the software I had written to generate my own link site (like a LinkTree clone) as an example.
I’m happy to report that I’ve continued to work on this software. Recently, it passed another milestone—I released a version to CPAN. It’s called App::LinkSite[*]. If you’d like a Link Site of your own, there are a few ways you can achieve that.
In all cases, you’ll want to gather a few pieces of information first. I store mine in a GitHub repo[**].
Most importantly, you’ll need the list of links that you want to display on your site. These go in a file called “links.json”. There are two types of link.
There are also a few bits of header information you’ll want to add:
Put all of that information into “links.json” and put the images in a directory called “img”. Fuller documentation is in the README.
Now you get to decide how you’re going to build your site.
Installed CPAN module
You can install the module (App::LinkSite) using your favourite CPAN installation tool. Then you can just run the “linksite” command and your site will be written to the “docs” directory – which you can then deploy to the web in whatever way you prefer.
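For example, with cpanm (though any CPAN client will do), the whole thing might look something like the rough sketch below. The linksite command and the docs output directory are as described above; the working-directory assumption and the deployment step are just illustrations, not the module’s documented usage.

```bash
# Install App::LinkSite from CPAN (assuming you use cpanm)
cpanm App::LinkSite

# Run it - presumably from the directory containing links.json and img/
# (the generated site is written to ./docs, as described above)
linksite

# Deploy ./docs however you prefer; rsync to a hypothetical host, say
rsync -av docs/ user@example.com:/var/www/links/
```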
Docker image
I build a Docker image whenever I release a new version of the code. That image is released to Docker Hub. So if you like Docker, you can just pull down the “davorg/links:latest” image and go from there.
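The pull is simple enough; how you then run the container (what to mount and where the output ends up) is something you should check in the README rather than trust my sketch, so treat the run line below as a hypothetical illustration.

```bash
# Fetch the image from Docker Hub
docker pull davorg/links:latest

# Hypothetical invocation: mount the directory containing links.json
# and img/, then pick up the generated site from docs/ afterwards.
# The /app mount point is an assumption - check the image's README.
docker run --rm -v "$(pwd):/app" davorg/links:latest
```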
GitHub Actions and GitHub Pages
But this is my favourite approach. Let GitHub do all the heavy lifting for you. There’s a little bit of set-up you’ll need to do.
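I won’t pretend to reproduce the exact workflow here, but a minimal sketch – reusing the upload-pages-artifact/deploy-pages pattern from the London Perl Mongers workflow earlier in this post – might look something like this. The build step that installs and runs linksite is my assumption (and, given the indexing issue mentioned in the footnote, you might need to install from a release tarball instead); the repo’s README has the canonical setup.

```yaml
name: Build and deploy link site

on:
  push:
    branches: [main]
  workflow_dispatch:   # the "run this workflow" button mentioned below

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      # Assumed step: install App::LinkSite and generate the site.
      # linksite writes the site to docs/, as described above.
      - name: Build site
        run: |
          sudo apt-get update
          sudo apt-get -y install cpanminus
          sudo cpanm --notest App::LinkSite
          linksite
      - uses: actions/upload-pages-artifact@v3
        with:
          path: docs/

  deploy:
    needs: build
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v4
```

You’ll also need to set the repo’s Pages source to “GitHub Actions” under Settings → Pages.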
Now, whenever you change anything in your repo, your site will be rebuilt and redeployed automatically. There’s also a “run this workflow” button under the “Actions” tab of your repo that allows you to trigger the build and deployment manually whenever you want.
This is the mechanism I like best – as it’s the least amount of work!
If you try this, please let me know as I’d like to add an “Examples” section to the README file. Also, if you try it and have problems getting it working, then let me know too. It works for me, but I’m sure I’ve forgotten to cater for some specific complexity of how other people would like to use my software. I’m always happy to get suggestions on how to improve things – even if it’s just better documentation.
[*] My continued use of the new Perl class syntax still seems to be causing problems with the CPAN infrastructure. The distribution isn’t being indexed properly.
[**] This shouldn’t be too much of a surprise – I store pretty much everything in a GitHub repo.
The post A link site of your very own first appeared on Perl Hacks.
Royal titles in the United Kingdom carry a rich tapestry of history, embodying centuries of tradition while adapting to the changing landscape of the modern world. This article delves into the structure of these titles, focusing on significant changes made during the 20th and 21st centuries, and how these rules affect current royals.
The framework for today’s royal titles was significantly shaped by the Letters Patent issued by King George V in 1917. This document was pivotal in redefining who in the royal family would be styled with “His or Her Royal Highness” (HRH) and as a prince or princess. Specifically, the 1917 Letters Patent restricted these styles to:
This move was partly in response to the anti-German sentiment of World War I, aiming to streamline the monarchy and solidify its British identity by reducing the number of royals with German titles.
Notice that the definitions talk about “a sovereign”, not “the sovereign”. This means that when the sovereign changes, no-one will lose their royal title (for example, Prince Andrew is still the son of a sovereign, even though he is no longer the son of the sovereign). However, people can gain royal titles when the sovereign changes — we will see examples below.
Understanding the implications of the existing rules as his family grew, King George VI issued a new Letters Patent in 1948 to extend the style of HRH and prince/princess to the children of the future queen, Princess Elizabeth (later Queen Elizabeth II). This was crucial as, without this adjustment, Princess Elizabeth’s children would not automatically have become princes or princesses because they were not male-line grandchildren of the monarch. This ensured that Charles and Anne were born with princely status, despite being the female-line grandchildren of a monarch.
Queen Elizabeth II’s update to the royal titles in 2012, before the birth of Prince William’s children, was another significant modification. The Letters Patent of 2012 decreed that all the children of the eldest son of the Prince of Wales would hold the title of HRH and be styled as prince or princess, not just the eldest son. This move was in anticipation of changes brought about by the Succession to the Crown Act 2013, which ended the system of male primogeniture, ensuring that the firstborn child of the Prince of Wales, regardless of gender, would be the direct heir to the throne. Without this change, there could have been a situation where Prince William’s first child (and the heir to the throne) was a daughter who wasn’t a princess, while her eldest brother – although younger than her – would have been a prince.
As the royal family branches out, descendants become more distant from the throne and lose their entitlement to HRH and princely status. For example, the Duke of Gloucester, the Duke of Kent, Prince Michael of Kent and Princess Alexandra all have princely status as male-line grandchildren of George V. Their children are great-grandchildren of a monarch and, therefore, do not have royal styles or titles. This reflects a natural trimming of the royal family tree, focusing the monarchy’s public role on those closest to the throne.
The evolution of British royal titles reflects both adherence to deep-rooted traditions and responsiveness to modern expectations. These titles not only delineate the structure and hierarchy within the royal family but also adapt to changes in societal norms and the legal landscape, ensuring the British monarchy remains both respected and relevant in the contemporary era.
Originally published at https://blog.lineofsuccession.co.uk on April 25, 2024.
Royal Titles Decoded: What Makes a Prince or Princess? — Line of Succession Blog was originally published in Line of Succession on Medium.
Changing rooms are the same all over the galaxy and this one really played to the stereotype. The lights flickered that little bit more than you’d want them to, a sizeable proportion of the lockers wouldn’t lock and the whole room needed a good clean. It didn’t fit with the eye-watering amount of money we had all paid for the tour.
There were a dozen or so of us changing from our normal clothes into outfits that had been supplied by the tour company — outfits that were supposed to render us invisible when we reached our destination. Not invisible in the “bending light rays around you” way, they would just make us look enough like the local inhabitants that no-one would give us a second glance.
Appropriate changing room etiquette was followed. Everyone was either looking at the floor or into their locker to avoid eye contact with anyone else. People talked in lowered voices to people they had come with. People who, like me, had come alone were silent. I picked up on some of the quiet conversations — they were about the unusual flora and fauna of our location and the unique event we were here to see.
Soon, we had all changed and were ushered into a briefing room where our guide told us many things we already knew. She had slides explaining the physics behind the phenomenon and was at great pains to emphasise the uniqueness of the event. No other planet in the galaxy had been found that met all of the conditions for what we were going to see. She went through the history of tourism to this planet — decades of uncontrolled visits followed by the licensing of a small number of carefully vetted companies like the one we were travelling with.
She then turned to more practical matters. She reiterated that our outfits would allow us to pass for locals, but that we should do all we could to avoid any interactions with the natives. She also reminded us that we should only look at the event through the equipment that we would be issued with on our way down to the planet.
Through a window in the briefing room a planet, our destination, hung in space. Beyond the planet, its star could also be seen.
An hour or so later, we were on the surface of the planet. We were deposited at the top of a grassy hill on the edge of a large crowd of the planet’s inhabitants. Most of us were of the same basic body shape as the quadruped locals and, at first glance at least, passed for them. A few of us were less lucky and had to stay in the vehicles to avoid suspicion.
The timing of the event was well understood and the company had dropped us off early enough that we were able to find a good viewing spot but late enough that we didn’t have long to wait. We had been milling around for half an hour or so when a palpable moment of excitement passed through the crowd and everyone looked to the sky.
Holding the equipment I had been given to my eyes I could see what everyone else had noticed. A small bite seemed to have been taken from the bottom left of the planet’s sun. As we watched, the bite got larger and larger as the planet’s satellite moved in front of the star. The satellite appeared to be a perfect circle, but at the last minute — just before it covered the star completely — it became obvious that the edge wasn’t smooth as gaps between irregularities on the surface (mountains, I suppose) allowed just a few points of light through.
And then the satellite covered the sun and the atmosphere changed completely. The world turned dark and all conversations stopped. All of the local animals went silent. It was magical.
My mind went back to the slides explaining the phenomenon. Obviously, the planet’s satellite and star weren’t the same size, but their distance from the planet exactly balanced their difference in size so they appeared the same size in the sky. And the complex interplay of orbits meant that on rare occasions like this, the satellite would completely and exactly cover the star.
That was what we were there for. This was what was unique about this planet. No other planet in the galaxy had a star and a satellite that appeared exactly the same size in the sky. This is what made the planet the most popular tourist spot in the galaxy.
Ten minutes later, it was over. The satellite continued on its path and the star was gradually uncovered. Our guide bundled us into the transport and back up to our spaceship.
Before leaving the vicinity of the planet, our pilot found three locations in space where the satellite and the star lined up in the same way and created fake eclipses for those of us who had missed taking photos of the real one.
Originally published at https://blog.dave.org.uk on April 7, 2024.
I really thought that 2023 would be the year I got back into the swing of seeing gigs. But, somehow, I ended up seeing even fewer than I did in 2022: just 12, compared to 16 the previous year. Sometimes, I look at Martin’s monthly gig round-ups and wonder what I’m doing with my life!
I normally list my ten favourite gigs of the year, but it would be rude to leave just two gigs off the list, so here are all twelve gigs I saw this year – in, as always, chronological order.
So, what’s going to happen in 2024? I wonder if I’ll get back into the habit of going to more shows. I only have a ticket for one gig next year – They Might Be Giants playing Flood in November (a show that was postponed from this year). I guess we’ll see. Tune in this time next year to see what happened.
Originally published at https://blog.dave.org.uk on December 31, 2023.
Yesterday’s coronation showed Britain doing what Britain does best — putting on the most gloriously bonkers ceremony the world has seen…
Rather later than usual (again!) here is my review of the best ten gigs I saw in 2022. For the first time since 2019, I did actually see more than ten gigs in 2022 although my total of sixteen falls well short of my pre-pandemic years.
Here are my ten favourite gigs of the year. As always, they’re in chronological order.
Not everything could make the top ten, though. I think this was the first year that I saw Stealing Sheep and they didn’t make the list (their stage shows just get weirder and weirder, and the Moth Club wasn’t a great venue for it), and I was astonished to find myself slightly bored at the Nine Inch Nails show at Brixton Academy.
A few shows sit just outside of the top ten – St. Vincent at the Eventim Apollo, John Grant at the Shepherd’s Bush Empire and Damon Albarn at the Barbican spring to mind.
But, all in all, it was a good year for live music and I’m looking forward to seeing more than sixteen shows this year.
Did you see any great shows this year? Tell us about them in the comments.
The post 2022 in Gigs appeared first on Davblog.
Using artificial intelligence (AI) to generate blog posts can be bad for search engine optimization (SEO) for several reasons.
First and foremost, AI-generated content is often low quality and lacks the depth and substance that search engines look for when ranking content. Because AI algorithms are not capable of understanding the nuances and complexities of human language, the content they produce is often generic, repetitive, and lacks originality. This can make it difficult for search engines to understand the context and relevance of the content, which can negatively impact its ranking.
Additionally, AI-generated content is often not well-written or structured, which can make it difficult for readers to understand and engage with. This can lead to a high bounce rate (the percentage of visitors who leave a website after only viewing one page), which can also hurt the website’s ranking.
Furthermore, AI-generated content is often not aligned with the website’s overall content strategy and goals. Because AI algorithms are not capable of understanding the website’s target audience, brand voice, and core messaging, the content they produce may not be relevant or useful to the website’s visitors. This can lead to a poor user experience, which can also hurt the website’s ranking.
Another issue with AI-generated content is that it can be seen as spammy or low quality by both search engines and readers. Because AI-generated content is often produced in large quantities and lacks originality, it can be seen as an attempt to manipulate search engine rankings or trick readers into engaging with the website. This can lead to the website being penalized by search engines or losing the trust and loyalty of its visitors.
In conclusion, using AI to generate blog posts can be bad for SEO for several reasons. AI-generated content is often low quality, poorly written, and not aligned with the website’s content strategy. It can also be seen as spammy or low quality by both search engines and readers, which can hurt the website’s ranking and reputation. It is important for websites to prioritize creating high-quality, original, and relevant content to improve their SEO and provide a positive user experience.
[This post was generated using ChatGPT]
The post 5 Reasons Why Using AI to Generate Blog Posts Can Destroy Your SEO appeared first on Davblog.