The view of the planet [AI-generated image]

Changing rooms are the same all over the galaxy and this one really played to the stereotype. The lights flickered that little bit more than you’d want them to, a sizeable proportion of the lockers wouldn’t lock and the whole room needed a good clean. It didn’t fit with the eye-watering amount of money we had all paid for the tour.

There were a dozen or so of us changing from our normal clothes into outfits that had been supplied by the tour company — outfits that were supposed to render us invisible when we reached our destination. Not invisible in the “bending light rays around you” way; they would just make us look enough like the local inhabitants that no-one would give us a second glance.

Appropriate changing room etiquette was followed. Everyone was either looking at the floor or into their locker to avoid eye contact with anyone else. People talked in lowered voices to people they had come with. People who, like me, had come alone were silent. I picked up on some of the quiet conversations — they were about the unusual flora and fauna of our location and the unique event we were here to see.

Soon, we had all changed and were ushered into a briefing room where our guide told us many things we already knew. She had slides explaining the physics behind the phenomenon and was at great pains to emphasise the uniqueness of the event. No other planet in the galaxy had been found that met all of the conditions for what we were going to see. She went through the history of tourism to this planet — decades of uncontrolled visits followed by the licensing of a small number of carefully vetted companies like the one we were travelling with.

She then turned to more practical matters. She reiterated that our outfits would allow us to pass for locals, but that we should do all we could to avoid any interactions with the natives. She also reminded us that we should only look at the event through the equipment that we would be issued with on our way down to the planet.

Through a window in the briefing room a planet, our destination, hung in space. Beyond the planet, its star could also be seen.

An hour or so later, we were on the surface of the planet. We were deposited at the top of a grassy hill on the edge of a large crowd of the planet’s inhabitants. Most of us were of the same basic body shape as the quadruped locals and, at first glance at least, passed for them. A few of us were less lucky and had to stay in the vehicles to avoid suspicion.

The timing of the event was well understood and the company had dropped us off early enough that we were able to find a good viewing spot but late enough that we didn’t have long to wait. We had been milling around for half an hour or so when a palpable moment of excitement passed through the crowd and everyone looked to the sky.

Holding the equipment I had been given to my eyes, I could see what everyone else had noticed. A small bite seemed to have been taken from the bottom left of the planet’s sun. As we watched, the bite got larger and larger as the planet’s satellite moved in front of the star. The satellite appeared to be a perfect circle, but at the last minute — just before it covered the star completely — it became obvious that the edge wasn’t smooth as gaps between irregularities on the surface (mountains, I suppose) allowed just a few points of light through.

And then the satellite covered the sun and the atmosphere changed completely. The world turned dark and all conversations stopped. All of the local animals went silent. It was magical.

My mind went back to the slides explaining the phenomenon. Obviously, the planet’s satellite and star weren’t the same size, but their distance from the planet exactly balanced their difference in size so they appeared the same size in the sky. And the complex interplay of orbits meant that on rare occasions like this, the satellite would completely and exactly cover the star.

That was what we were there for. This was what was unique about this planet. No other planet in the galaxy had a star and a satellite that appeared exactly the same size in the sky. This was what made the planet the most popular tourist spot in the galaxy.

Ten minutes later, it was over. The satellite continued on its path and the star was gradually uncovered. Our guide bundled us into the transport and back up to our spaceship.

Before leaving the vicinity of the planet, our pilot found three locations in space where the satellite and the star lined up in the same way and created fake eclipses for those of us who had missed taking photos of the real one.

Originally published at https://blog.dave.org.uk on April 7, 2024.

I gave my first public talk sometime between the 22nd and 24th September 2000. It was at the first YAPC::Europe which was held in London between those dates. I can’t be any more precise because the schedule is no longer online and memory fades.

I can, however, tell you that the talk was a disaster. I originally wasn’t planning to give a talk at all, but my first book was about to be published and the publishers thought that giving a talk about it to a room full of Perl programmers would be great marketing. I guess that makes sense. But what they didn’t take into account was the fact that I knew nothing about how to give an interesting talk. So I threw together a few bullet points taken from the contents of the book and wrote a simple Perl script to turn those bullet points into HTML slides (it was 2000 – that’s what everyone did). I gave absolutely no thought to what the audience might want to know or how I could tell a story to guide them through. It was a really dull talk. I’m sorry if you were in the audience. Oh, and add the fact that I was speaking after the natural raconteur Charlie Stross, and you can probably see why I’m eternally grateful that the videos we took of the conference never saw the light of day. I left the stage knowing for sure that public speaking was not for me and vowed that I would never give another talk.

But…

We were experimenting with a session of lightning talks at the conference and I had already volunteered to give a talk about my silly module Symbol::Approx::Sub. I didn’t feel that I could back out and, anyway, it was only five minutes. How bad could it be?

As it turns out, with Symbol::Approx::Sub I had stumbled on something that was simultaneously funny and useful (well, the techniques are useful – obviously the module itself isn’t). And I accidentally managed to tell the story of the module engagingly and entertainingly. People laughed. And they clapped enthusiastically at the end. I immediately changed my mind about never speaking in public again. This was amazing. This was as close as I was ever going to get to playing on stage at the Hammersmith Odeon. This was addictive.

But something had to change. I had to get better at it. I had to work out how to give entertaining and useful talks that were longer than five minutes. So I studied the subject of public speaking. The Perl community already had two great public speakers in Mark Dominus and Damian Conway, and I took every opportunity to watch them speak and work out what they were doing. It helped that they both ran courses on how to be a better public speaker. I also read books on the topic and, when TED talks started coming online, I watched the most popular ones obsessively to work out what people were doing to give such engaging talks (it turns out the answer really boils down to taking out most of the content!)

And I practiced. I don’t think there was a conference I went to between 2000 and 2020 where I didn’t give a talk. I’d never turn down an opportunity to speak at a Perl Mongers meeting. And, while I’m certainly not Damian Conway, I like to think I got better at it. I’d get pretty good scores whenever there was a feedback form.

All of which means that I’ve given dozens of talks over the last twenty-plus years. From lightning talks to all-day (actually, a couple of two-day) training sessions. I’ve tried to be organised about keeping copies of the slides from all of the talks I’ve given, but I fear a few decks have slipped through the cracks over the years. And, of course, there are plenty of videos of me giving various talks over that time.

I’ve been thinking for a while that it would be good to gather them all together on one site. And, a couple of weeks ago, I started prodding at the project. Today, it reached the stage where it’s (just barely) usable. It’s at talks.davecross.co.uk. Currently, it’s just a list of talk titles and it only covers the last five years or so (and for a lot of that time, there were no conferences or meetings to speak at). But having something out there will hopefully encourage me to expand it in two dimensions:

  • Adding descriptions of the talks along with embedded slides and video
  • Adding more talks

The second point is going to be fun. There will be some serious data archaeology going on. I think I can dig out details of all the YAPCs and LPWs I’ve spoken at – but can I really find details of every London Perl Mongers technical meeting? And there are some really obscure things in there – I’m pretty sure I spoke at a Belgian Perl Workshop once. And what was that Italian conference held in Ferrara just before the Mediterranean Perl Whirl? There’s a lot of digging around in the obscure corners of the web (and my hard disk!) in my near future.

Wish me luck.

The post Collecting talks first appeared on Perl Hacks.

I’ve spent more than a reasonable amount of time thinking about Amazon links over the last three or four years.

It started with the Perl School web site. Obviously, I knew that the book page needed a link to Amazon – so people could buy the books if they wanted to – but that’s complicated by the fact that Amazon has so many different sites and I have no way of knowing which site is local to anyone who visits my web site. I had the same problem when I built a web site for George and the Smart Home. And again when I created a site for Will Sowman’s books. At some point soon, I’ll also want to put book pages on the Clapham Tech Press web site – and that will have exactly the same problem.

That’s the user-visible side of the equation. There are other reasons for wanting to know about all of the existing Amazon sites. One of the best ones is that I want to track royalties from the various sites and apportion them to the right authors.

On the Perl School site, I solved the problem by creating a database table which contains data about the sites that I knew about at the time. Then there’s a DBIC result class and that result set is passed to the book page template, which builds “buy” buttons for each site found in the result set. That works, but it’s not very portable. When it came to the other sites, I found myself writing a “make_buttons” program which used the Perl School database table to generate some HTML which I then copied into the relevant template.

But that never sat well with me. It made me uncomfortable that all of my book sites relied on a database table that existed in one of my repos that, really, has no connection to those other sites. I thought briefly about duplicating the table into the other repos, but that set off the “Don’t Repeat Yourself” alarm in my head, so I backed away from that idea pretty quickly.

It would be great if Amazon had an API for this information. But, unless I’m blind, it seems to be the only API that they don’t provide.

So, currently, what I’ve done is to encapsulate the data in a CPAN module. It’s called Amazon::Sites and I’ve been releasing slowly-improving versions of it over the last week or so – and it’s finally complete enough that I can use it to replace my database table. It might even make the code for my various book sites easier to maintain.

Maybe it will be useful to you too.

Here’s how you use it:

use Amazon::Sites;
 
my $sites = Amazon::Sites->new;
my @sites = $sites->sites;
my %sites = $sites->sites_hash;
my @codes = $sites->codes;
 
my $site  = $sites->site('UK');
say $site->currency; # GBP
say $site->tldn;     # co.uk
# etc
 
my %urls = $sites->asin_urls('XXXXXXX');
say $urls{UK}; # https://amazon.co.uk/dp/XXXXXXX

Once you’ve created an instance of the class, you have access to a few useful methods:

  • sites – returns a list of all of the sites the object knows about. Each element in the list is an Amazon::Site object
  • sites_hash – returns the same information, but as a hash. The key is a two-letter ISO country code and the value is an Amazon::Site object
  • codes – returns a list of all of the country codes that the object knows about
  • site(country_code) – expects a two-letter ISO country code and returns the Amazon::Site object for that country’s Amazon site

The Amazon::Site object has a number of useful attributes:

  • code – the country code
  • country – the country’s name in English
  • currency – the ISO code for the currency used on that site
  • tldn – the top-level domain name that the Amazon site uses (e.g. .com or .co.uk)
  • domain – the full domain that the Amazon site uses (e.g. amazon.com or amazon.co.uk)

Amazon::Site also has an “asin_url()” method. You pass it an ASIN (that’s the unique identifier that Amazon uses for every product on its site) and it returns the full URL of that product on that site. There’s a similar “asin_urls()” (note the “s” at the end) on the Amazon::Sites object. That returns a hash of URLs for all of the sites the object knows about. The key is the country code and the value is the URL in that country.

You can also filter the list of Amazon sites that you’re interested in when creating your Amazon::Sites object. The constructor takes optional “include” and “exclude” arguments. Each of them is a reference to an array of ISO country codes. For reasons that are, I hope, obvious, you can only use one of those options at a time.
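
For example, here’s roughly how I’d expect that filtering to look in practice (a minimal sketch based on the constructor arguments described above – the particular country codes are just examples):

use feature 'say';
use Amazon::Sites;

# Only the UK, German and French sites...
my $eu_sites = Amazon::Sites->new(include => ['UK', 'DE', 'FR']);

# ...or every site except the US one.
my $non_us = Amazon::Sites->new(exclude => ['US']);

say for $eu_sites->codes; # prints the three codes, one per line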

If you’re an Amazon Associate, you can make money by including your “associate code” in Amazon URLs that you share with people. Amazon::Sites deals with that too. An Amazon associate code is associated with one Amazon site. So the constructor method has an optional “assoc_codes” argument which is a hash mapping country codes to associate codes. If you have set up associate codes in your Amazon::Sites object, then your associate code will be included in any URLs that are generated by the module – as long as the URL is for one of the sites that you have an associate code for.
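
Here’s a short sketch of how I’d expect that to fit together (the associate codes below are made up, and I’m assuming the mapping is passed as a hash reference):

use feature 'say';
use Amazon::Sites;

my $sites = Amazon::Sites->new(
    include     => ['UK', 'US'],
    assoc_codes => {            # hypothetical associate codes
        UK => 'example-21',
        US => 'example-20',
    },
);

# URLs for sites that have an associate code will include it.
say $sites->site('UK')->asin_url('XXXXXXX');

my %urls = $sites->asin_urls('XXXXXXX');
say $urls{US};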

That’s all it does at the moment. It addresses most of my needs. There’s one more feature I might add soon. I’d like to have template processing built-in – so if I have a template and an Amazon::Sites object, I can easily process that template for every site that the object knows about.

So that’s the class. I hope someone out there finds it useful. If you think it’s almost useful, but there’s a feature missing then please let me know (or even send a pull request).

But there are a couple of other things I’d like to mention about how I wrote this class.

Firstly, this is written using the new perlclass OO syntax. Specifically, it uses Feature::Compat::Class, so you can use it on versions of Perl back to 5.26. It’s true that the new syntax doesn’t have all the features that you’d get with something like Moose, but I love using it – and over the next few versions of Perl, it will only get better and better. If you haven’t tried the new syntax yet, then I recommend you have a look at it.
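
To give a flavour of it, here’s a tiny, self-contained sketch of the kind of code the new syntax lets you write (it has nothing to do with Amazon::Sites – it just shows the general shape of class, field and method):

use v5.26;
use warnings;
use Feature::Compat::Class;

class Counter {
    # A field that can be set from the constructor, with a default.
    field $count :param = 0;

    method increment { $count++ }
    method count     { $count }
}

my $counter = Counter->new(count => 3);
$counter->increment;
say $counter->count; # 4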

Secondly, this is the first new CPAN distribution I’ve written since I’ve had my subscription to GitHub Copilot. And I’m really impressed at how much faster I was using Copilot. As I said, I was using experimental new Perl syntax, so I was impressed at how well Copilot understood what I was doing. I lost count of the number of times I typed the name of a new method and Copilot instantly wrote the code for me – and 95% of the time the code it wrote was spot on. AI programming support is here and it’s good. If you’re not using it yet, then you’re losing out.

I’m told a good blog post needs a “call to action”. This one has three:

  1. Start using Perl’s new class syntax
  2. Look at GitHub Copilot and similar tools
  3. Please use my new module

The post Amazon Links and Buttons first appeared on Perl Hacks.

I can’t be the only programmer who does this. You’re looking for an online service to fill some need in your life. You look at three or four competing products and they all get close but none of them do everything you want. Or maybe they do tick all the boxes but they cost that little bit more than you’re comfortable paying. After spending a few hours on your search that little voice pops up in your head with that phrase that you really don’t want to hear:

Maybe you should just write your own version. How hard can it be?

A couple of hours later, you have something that vaguely works, you’ve learned more than you thought there was to learn about some obscure corner of life and you’re the proud owner of another new domain.

Please tell me it’s not just me.

So today I’ve been working on my Linktree clone.

Honestly, I can’t remember what it was about Linktree or its existing clones that I didn’t like. I suspect it’s that I just wanted more control over my links page than a hosted service would give me. All I can be sure of is that in September 2022 I made the first commit to a project that, eighteen months later, I’m still maintaining and improving.

To be fair to myself, I didn’t buy a new domain. That means I’m getting better, right? The output is hosted at links.davecross.co.uk. I’m not even paying for hosting as it’s all hosted on GitHub Pages – it’s a static site that has occasional changes, so it’s perfect for GitHub Pages.

But I have spent quite a lot of time working on the code. Probably more than is reasonable for a web site that gets a dozen visits in a good month. Work on it seems to come in waves. I’ll go for months without touching it, and then I’ll spend a week or so working on it pretty much every day. Over the last 24 hours or so, I’ve passed an important milestone. Like all of these little side projects, this one started out as a largely unstructured code dump – as I worked to get it doing something that approximated the original goal. Then I’ll spend some time (months, usually) where fixes and improvements are implemented by hacking on the original horrible code. At some point, I’ll realise that I’m making things too difficult for myself and I’ll rewrite it (largely from scratch) to be better structured and easier to maintain. That’s where I got to today. The original single-file code dump has been rewritten into something that’s far nicer to work on. And as a side benefit, I’ve rewritten it all using Perl’s new, built-in object orientation features – which I’m loving.

Oh, and I guess that’s the upside of having little side projects like this – I get to try out new features like the new OO stuff in a no-pressure environment. And just spending time doing more programming has to make you a better programmer, right? And surely it’s just a matter of time before one of these projects takes off and turns me into a millionaire! I’m not saying for a minute that having pointless side projects is a bad idea. I’m just wondering how many pointless side projects are too many 🙂

So, that’s my guilty secret – I’m a serial writer of code that doesn’t really need to be written. What about you? How many pointless side projects do you have? And how much of your spare time do they use up?

The post Pointless personal side projects first appeared on Perl Hacks.

The future is already here – it’s just not very evenly distributed
– William Gibson

The quotation above was used by Tim O’Reilly a lot around the time that Web 2.0 got going. Over recent months, I’ve had a few experiences that have made it clear to me that even the present isn’t particularly evenly distributed either. It’s always easy to find people still using technologies that we would consider archaic (and not in a rustic or hipster way).

We’ve known for twenty years that CGI is a bad idea. It’s almost ten years since CGI.pm was removed from Perl core. Surely, all of us are using something modern for web development these days.

Well, apparently not. CGI is alive and well and living on the fringes of the Perl community. I’ve come across it being used in some quite surprising places over the last year or so. I’m going to obfuscate some details in the following descriptions to, hopefully, prevent you (or, worse, the people involved) from recognising the companies involved.

  • I did some work for a spectacularly big (and I mean huge) consultancy company. They wanted to decommission some old servers – which involved moving some Perl CGI programs that no-one had looked at for about fifteen years. These programs were, of course, running vital bits of the business. Anyone who had ever edited them had left the company at least ten years earlier. They wanted to do it as quickly as possible and change as little of the code as possible. The code was incompatible with even vaguely modern versions of Perl, so much of the work involved installing old versions of Perl (along with Apache and even mod_perl) on new hardware running up-to-date operating systems.
  • I picked up two or three freelancing gigs on Fiverr. And for the first time in years, I found myself working with low-end, rented, shared servers. At least one of them was one of those situations where you don’t have root access and are extremely hampered by the lack of software.
  • A couple of weeks ago, I got an email to my SourceForge email address asking for help with nms Formmail (some readers may be young enough that they haven’t heard of Matt’s Script Archive or the London Perl Mongers rewrite of those programs into what we called “modern Perl” twenty years ago). The email asked if our Formmail supported anti-spam measures like SPF, DMARC and DKIM. It was nostalgic to recall how different the web was back in the days when every web site had a mail form, a guest book and a hit counter. I see that SourceForge have removed the nms web site. I doubt I’ll ever get the time to work out what happened to it.  [Update: I was wrong about that. It’s been so long since I’ve looked at the nms project that I had forgotten the URL. The web site is still there in all its early-2000s car-crash web design glory.]
  • The following day I saw a question on Stack Overflow about a “classic mailing script”. And, yes, it was nms Formmail again. This user had moved their web site to a new server and it had stopped working. We never got the error log content that we asked them for, but the user confirmed my suspicion that the new web server had a newer version of Perl – one that was released after CGI.pm was removed. The nms project had (for obvious reasons) made heavy use of the module and its removal from core Perl has rendered the nms programs unusable on cheap servers where the sysadmin has no knowledge of or interest in installing any Perl modules that aren’t part of the standard package. Sadly, this means that Matt Wright’s original versions (that were never updated to use CGI.pm) still work in environments where the nms versions are useless.

None of this should be taken as an argument that the nms project was wrong to use CGI.pm or that the Perl 5 Porters were wrong to remove it from the Perl standard library. I still support both decisions. I just found it a bit jarring to be reminded that while we’re all using PSGI or Mojolicious to write microservices in Perl that serve REST APIs that are developed and deployed in Docker containers, there are still people out there who are struggling to FTP code that was written in 1997 onto low-end shared hosting.
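
For anyone whose only experience of Perl on the web is CGI scripts, this is roughly what a minimal PSGI application looks like (a deliberately tiny sketch – save it as app.psgi and run it with a PSGI server such as plackup):

use strict;
use warnings;

# A complete PSGI application is just a code reference that takes the
# request environment and returns a status, some headers and a body.
my $app = sub {
    my $env = shift;

    return [
        200,
        [ 'Content-Type' => 'text/plain' ],
        [ "Hello from PSGI\n" ],
    ];
};

$app;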

I think this state of affairs has two causes. Firstly (like the first client I mentioned above) some systems were set up when CGI was still in common use – and things haven’t changed since. These people get a sudden shock when they are forced to move to a more modern server for some reason. And then there are people like my Fiverr clients who install Perl CGI programs because that’s what they have always done and they don’t know that there is an alternative approach. Part of the problem there is, presumably, that for a large proportion of the web’s existence “Perl” has meant badly-written CGI programs, which means anyone searching for information on this subject is likely to find pages and pages of advice telling them how to install CGI programs before they discover anything about PSGI or Docker. And I think there might be a solution to that problem (or, at least, a way to nudge the web in the right direction).

Over last weekend I was cataloguing subdomains (I know how to have fun!) and I found a web site that I had forgotten about. I had obviously been contemplating a very similar situation back in 2016.

The site is called Perl Web Advice. The intention was (is?) that it would be a definitive source of good advice about how to develop and deploy web applications written in Perl. I had only made tiny inroads into the task before something else apparently seemed more fun and the project was abandoned.

But there’s the start of a framework for the site. And, this week, I’ve given it a GitHub Actions workflow so it gets republished automatically whenever changes are pushed to the repo. I’ve even set up a Dockerfile to make it easy to use the static site generator that I’ve used for it. So perhaps the idea has merit. Once there’s a bit more useful content there, I could see if I can remember any of my SEO knowledge and get it appearing in results where people are looking for advice on this topic.

I would, of course, be happy to consider contributions from other people. What do you think? Would you like to help me save people from the hell of CGI deployments?

The post The present isn’t evenly distributed either first appeared on Perl Hacks.

You might remember that I’ve been taking an interest in GitHub Actions for the last year or so (I even wrote a book on the subject). And at the Perl Conference in Toronto last summer I gave a talk called “GitHub Actions for Perl Development” (here are the slides and the video).

During that talk, I mentioned a project I was working on to produce a set of reusable workflows that would make it easier for anyone to start using GitHub Actions in their Perl development. Although (as I said in the talk) things were moving pretty quickly on the project at the time, once I got back to London, several other things became more important and work on this project pretty much stalled. But over the last couple of weeks, I’ve returned to this project and I’ve finally got some of the workflows into a state where I’ve been using them successfully in my GitHub repos and I think they’re now ready for you to start using them in yours. There are three workflows that I’d like you to try:

  • cpan-test: This runs your test suite and reports on the results
  • cpan-coverage: This calculates the coverage of your test suite and reports the results. It also uploads the results to coveralls.io
  • cpan-perlcritic: This runs perlcritic over your code and reports on the results

And using these workflows in your GitHub repos is as simple as creating a new file in the .github/workflows directory which contains something like this:

name: CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  workflow_dispatch:

jobs:
  build:
    uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-test.yml@main

  coverage:
    uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-coverage.yml@main

  perlcritic:
    uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-perlcritic.yml@main

There are a couple of parameters you can use to change the behaviour of these workflows. In the Toronto talk, I introduced the idea of a matrix of tests, where you can test against three operating systems (Linux, MacOS and Windows) and a list of Perl versions. By default, the cpan-test workflow uses all three operating systems and all production versions of Perl from 5.24 to 5.38. But you can change that by using the perl_version and os parameters. For example, if you only wanted to test on Ubuntu, using the most recent two versions of Perl, you could use this:

build:
  uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-test.yml@main
  with:
    perl_version: "['5.36', '5.38']"
    os: "['ubuntu']"

Annoyingly, the parameters to a reusable workflow can only be a single scalar value. That’s why we have to use a JSON-encoded string representing an array of values. Maybe this will get better in the future.

The cpan-perlcritic workflow also has a parameter. You can use level to change the level that perlcritic runs at. The default is 5 (the gentlest level) but if you were feeling particularly masochistic, you could do this:

perlcritic:
  uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-perlcritic.yml@main
  with:
    level: 1

The workflows are, of course, available on GitHub. It would be great to have some people trying them out and reporting back on their experiences. Raising issues and sending pull requests is very much encouraged.

Please let me know how you get on with them.

The post GitHub Actions for Perl Development first appeared on Perl Hacks.

I really thought that 2023 would be the year I got back into the swing of seeing gigs. But somehow, I ended up seeing even fewer than I did in 2022: just 12 this year, compared with the 16 I saw the previous year. Sometimes, I look at Martin’s monthly gig round-ups and wonder what I’m doing with my life!

I normally list my ten favourite gigs of the year, but it would be rude to leave just two gigs off the list, so here are all twelve gigs I saw this year – in, as always, chronological order.

  • John Grant (supported by The Faultress) at St. James’s Church
    John Grant has become one of those artists I try to see whenever they pass through London. And this was a particularly special night as he was playing an acoustic set in one of the most atmospheric venues in London. The evening was only slightly marred by the fact I arrived too late to get a decent seat and ended up not being able to see anything.
  • Hannah Peel at Kings Place
    Hannah Peel was the artist in residence at Kings Place for a few months during the year and played three gigs during that time. This was the first of them – where she played her recent album, Fir Wave, in its entirety. A very laid-back and thoroughly enjoyable evening.
  • Orbital at the Eventim Apollo
    I’ve been meaning to get around to seeing Orbital for many years. This show was originally planned to be at the Brixton Academy but as that venue is currently closed, it was relocated to Hammersmith. To be honest, this evening was slightly hampered by the fact I don’t know as much of their work as I thought I did and it was all a bit samey. I ended up leaving before the encore.
  • Duran Duran (supported by Jake Shears) at the O2 Arena
    Continuing my quest to see all of the bands I was listening to in the 80s (and, simultaneously, ticking off the one visit to the O2 that I allow myself each year). I really enjoyed the nostalgia of seeing Duran Duran but, to be honest, I think I enjoyed Jake Shears more – and it was the Scissor Sisters I was listening to on the way home.
  • Hannah Peel and Beibei Wang at Kings Place
    Even in a year where I only see a few gigs, I still manage to see artists more than once. This was the second of Hannah Peel’s artist-in-residence shows. She appeared with Chinese percussionist Beibei Wang in a performance that was completely spontaneous and unrehearsed. Honestly, some parts were more successful than others, but it was certainly an interesting experience.
  • Songs from Summerisle at the Barbican Hall
    The Wicker Man is one of my favourite films, so I jumped at the chance to see the songs from the soundtrack performed live. But unfortunately, the evening was a massive disappointment. The band sounded like they had met just before the show and, while they all obviously knew the songs, they hadn’t rehearsed them together. Maybe they were going for a rustic feel – but, to me, it just sounded unprofessional.
  • Belle and Sebastian at the Roundhouse
    Another act that I try to see as often as possible. I know some people see Belle and Sebastian as the most Guardian-reader band ever – but I love them. This show saw them on top form.
  • Jon Anderson and the Paul Green Rock Academy at the Shepherds Bush Empire
    I’ve seen Yes play live a few times in the last ten years or so and, to be honest, it can sometimes be a bit over-serious and dull. In this show, Jon Anderson sang a load of old Yes songs with a group of teenagers from the Paul Green Rock Academy (the school that School of Rock was based on) and honestly, the teenagers brought such a feeling of fun to the occasion that it was probably the best Yes-related show that I’ve seen.
  • John Grant and Richard Hawley at the Barbican Hall
    Another repeated act – my second time seeing John Grant in a year. This was something different as he was playing a selection of Patsy Cline songs. I don’t listen to Patsy Cline much, but I knew a few more of the songs than I expected to. This was a bit lower-key than I was expecting.
  • Peter Hook and the Light at the Eventim Apollo
    I’ve been planning to see Peter Hook and the Light for a couple of years. There was a show I had tickets for in 2020, but it was postponed because of COVID and when it was rescheduled, I was unable to go, so I cancelled my ticket and got a refund. So I was pleased to get another chance. And this show had them playing both of the Substance albums (Joy Division and New Order). I know New Order still play some Joy Division songs in their sets, but this is probably the best chance I’ll have to see some deep Joy Division cuts played live. I really enjoyed this show.
  • Heaven 17 at the Shepherds Bush Empire
    It seems I see Heaven 17 live most years and they usually appear on my “best of” lists. This show was celebrating the fortieth anniversary of their album The Luxury Gap – so that got played in full, alongside many other Heaven 17 and Human League songs. A thoroughly enjoyable night.
  • The Imagined Village and Afro-Celt Sound System at the Roundhouse
    I’ve seen both The Imagined Village and the Afro-Celts live once before. And they were two of the best gigs I’ve ever seen. I pretty much assumed that the death of Simon Emmerson (who was an integral part of both bands) earlier in 2023 would mean that both bands would stop performing. But this show was a tribute to Emmerson and the bands both reformed to celebrate his work. This was probably my favourite gig of the year. That’s The Imagined Village (featuring two Carthys, dour Coppers and Billy Bragg) in the photo at the top of this post.

So, what’s going to happen in 2024? I wonder if I’ll get back into the habit of going to more shows. I only have a ticket for one gig next year – They Might Be Giants playing Flood in November (a show that was postponed from this year). I guess we’ll see. Tune in this time next year to see what happened.

The post 2023 in Gigs appeared first on Davblog.

Seventy Years of Change

Her Majesty has, of course, seen changes in many areas of society in the seventy years of her reign. But here, we’re most interested in the line of succession. So we thought it would be interesting to look at the line of succession on the day that she took the throne and see what had happened to the people who were at the top of the line of succession on that day. It’s a very different list to today’s.

  1. The Prince Charles, Duke of Cornwall
    We start with the one person who is in exactly the same place as he was seventy years ago. Prince Charles was three years old and hadn’t yet been made Prince of Wales.
  2. The Princess Anne
    Princess Anne has fallen a long way in seventy years. The birth of younger brothers (back in the days when sex mattered in the line of succession) and those brothers having families of their own mean that she is now down at number 17.
  3. Princess Margaret
    We’ve run out of the Queen’s descendants after only two places (today, they fill the top 24 places in the line) so we move to her sister. Princess Margaret had fallen to 11th place before her death in 2002.
  4. Prince Henry, Duke of Gloucester
    We’ve now run out of descendants of George VI, so we need to look at his brothers. This is the father of the current duke. He fell to 8th place before dying in 1974.
  5. Prince William of Gloucester
    The Duke of Gloucester’s eldest son had fallen to 9th place before he sadly predeceased his father in 1972.
  6. Prince Richard of Gloucester
    As his elder brother had predeceased their father, it was Prince Richard who became Duke of Gloucester when the first duke died in 1974. He is currently in 30th place.
  7. Prince Edward, Duke of Kent
    The first Duke of Kent had died ten years earlier, so in 1952 it was his son, Prince Edward, then aged just 16, who held the title. He fell out of the top 30 in 2012.
  8. Prince Michael of Kent
    Prince Michael had fallen to 16th place before his marriage to a Catholic, in 1978, excluded him from the line of succession. He was reinstated in 2015 (because the Succession to the Crown Act meant that marriage to a Catholic was no longer a reason for exclusion) but he reappeared outside of the top 30.
  9. Princess Alexandra of Kent
    Princess Alexandra had dropped down the list pretty consistently throughout her life. From 1999 she popped in and out of the top 30 a few times, but she left it for the last time in 2003.
  10. Princess Mary, Princess Royal
    The youngest child and only daughter of George V, Princess Mary had fallen to 17th in line before she died in 1965.
  11. George Lascelles, The 7th Earl of Harewood
    Fell out of the top 30 in 1994 before dying in 2011.
  12. David Lascelles, Viscount Lascelles
    Fell out of the top 30 in 1993.
  13. Gerald Lascelles
    Fell out of the top 30 in 1982 and died in 1998.
  14. Princess Arthur of Connaught, Duchess of Fife
    Fell to 17th before dying in 1959.
  15. James Carnegie, 3rd Duke of Fife
    Fell out of the top 30 in 1981 and died in 2015.
  16. Olaf V, King of Norway
    A bit of a leap as we find the royal family of Norway surprisingly close to the top of the list. King Olaf was a grandson of Edward VII (through Edward’s daughter Maud). He fell out of the top 30 in 1979 and died in 1991.
  17. Prince Harald of Norway
    Prince Harald became king of Norway in 1991. He fell out of the top 30 of the British line of succession in 1977.
  18. Princess Ragnhild of Norway
    Princess Ragnhild fell out of the top 30 in 1973 and died in 2012.
  19. Princess Astrid of Norway
    Princess Astrid fell out of the top 30 in 1964.
  20. Carol II of Romania
    The next-closest royal family to ours is the Romanians. Carol II was a great-grandson of Victoria. The death of George VI moved him up a place from 21 to 20 and he remained there until his death the following year. Carol hadn’t actually been King of Romania since he was forced to abdicate in 1940.
  21. Carol Lambrino
    Carol Lambrino’s legitimacy is a matter of some dispute, so he may not have been in the line of succession at all. But, if he was, he fell out of the top 30 in 1963 and died in 2006.
  22. Paul-Philippe Hohenzollern
    As the son of the possibly-illegitimate Carol Lambrino, Paul-Philippe’s place in the line of succession is also in question. But, anyway, he fell out of the top 30 in 1962.
  23. Prince Nicholas of Romania
    Prince Nicholas fell out of the top 30 in 1961 and died in 1978.
  24. Elisabeth of Romania
    Fell to number 27 before dying in 1956.
  25. Maria of Yugoslavia
    Fell to position 30 before dying in 1961.
  26. Peter II of Yugoslavia
    Peter was no longer King of Yugoslavia, having been deposed in 1945. He fell out of the top 30 in 1961 and died in 1970.
  27. Prince Tomislav of Yugoslavia
    Fell out of the top 30 in 1960 and died in 2000.
  28. Prince Andrew of Yugoslavia
    Fell out of the top 30 in 1959 and died in 1990.
  29. Princess Ileana of Romania
    Fell out of the top 30 in 1954 and died in 1991.
  30. Archduke Stefan of Austria
    Fell out of the top 30 in 1953 and died in 1998.

I think that’s an interesting list for a few reasons:

  • We’ve gone from two of the Queen’s descendants on the list to twenty-four of them (but even that’s not as big a difference as happened during Victoria’s reign).
  • Only ten of the people on the list are still living.
  • There’s a large number of foreign royalty on the list — basically, the second half of the list is taken up by members of the royal families of Norway, Romania and Yugoslavia. This is obviously because of the way that royal families inter-married up until early in the 20th century. We see far less of that now.

So what do you think? Was the 1952 list a surprise to you? Did you expect it to be as different as it is from the current list?

Originally published at https://blog.lineofsuccession.co.uk on February 7, 2022.


Seventy Years of Change — Line of Succession Blog was originally published in Line of Succession on Medium.

Yesterday’s coronation showed Britain doing what Britain does best — putting on the most gloriously bonkers ceremony the world has seen…

Ratio: The Simple Codes Behind the Craft of Everyday Cooking (1) (Ruhlman's Ratios)
author: Michael Ruhlman
name: David
average rating: 4.06
book published: 2009
rating: 0
read at:
date added: 2023/02/06
shelves: currently-reading
review:

Rather later than usual (again!), here is my review of the ten best gigs I saw in 2022. For the first time since 2019, I actually saw more than ten gigs in 2022, although my total of sixteen falls well short of my pre-pandemic years.

Here are my ten favourite gigs of the year. As always, they’re in chronological order.

  • Pale Waves at the Roundhouse
    I’ve seen Pale Waves a few times now and I think they’ve firmly established their place on my “see them whenever they tour near me” list. This show was every bit as good as I’ve ever seen them.
  • Orchestral Manoeuvres in the Dark at the Royal Albert Hall
    Another band I see whenever I can. This was a slightly different set where the first half was called “Atmospheric” and concentrated on some deeper cuts from their back catalogue and the second half included all the hits.
  • Chvrches at Brixton Academy
    In 2020, I moved to a flat that’s about fifteen minutes’ walk from Brixton Academy. But I had to wait about eighteen months in order to take advantage of that fact. The last couple of times I’ve seen Chvrches were at Alexandra Palace, so it was nice to see them at a smaller venue again. This show featured a not-entirely unexpected guest appearance from Robert Smith.
  • Sunflower Bean at Electric Ballroom
    Another act who I see live as often as I can. And this was a great venue to see them in.
  • Pet Shop Boys at the O2 Arena
    There’s always one show a year that draws me to the soulless barn that is the O2 Arena. Every time I go there, I vow it’ll be the last time – but something always pulls me back. This year it was the chance to see a band I loved in the 80s and have never seen live. This was a fabulous greatest hits show that had been postponed from 2020.
  • Lorde at the Roundhouse
    A new Lorde album means another Lorde tour. And, like Chvrches, she swapped the huge expanse of Alexandra Palace for multiple nights at a smaller venue. This was a very theatrical show that matched the vibe of the Solar Power album really well.
  • LCD Soundsystem at Brixton Academy
    Another show at Brixton Academy. For some reason, I didn’t know about this show until I walked past the venue a few days before and saw the “sold out” signs. But a day or so later, I got an email from the venue offering tickets. So I snapped one up and had an amazing evening. It was the first time I’d seen them, but I strongly suspect it won’t be the last. That’s them in the photo at the top of this post.
  • Roxy Music at the O2 Arena
    Some years there are two shows that force me to the O2 Arena. And this was one of those years. I’ve been a fan of Roxy Music since the 70s but I’ve never seen them live. Honestly, it would have been better to have seen them in the 70s or 80s, but it was still a great show.
  • Beabadoobee at Brixton Academy
    Sometimes you go to see an artist because of one song and it just works out. This was one of those nights. In fact, it turns out I didn’t actually know “Coffee For Your Head” very well – I just knew the sample that was used in another artist’s record. But this was a great night and I hope to see her again very soon.
  • Sugababes at Eventim Apollo
    Another night of fabulous nostalgia. The Eventim Apollo seems to have become my venue of choice to see re-formed girl groups from the 80s and 90s – having seen Bananarama, All Saints and now The Sugababes there in recent years. They have a surprising number of hits (far more than I remembered before the show) and they put on a great show.

Not everything could make the top ten though. I think this was the first year that I saw Stealing Sheep and they didn’t make the list (their stage shows just get weirder and weirder, and the Moth Club wasn’t a great venue for it), and I was astonished to find myself slightly bored at the Nine Inch Nails show at Brixton Academy.

A few shows sit just outside of the top ten – St. Vincent at the Eventim Apollo, John Grant at the Shepherd’s Bush Empire and Damon Albarn at the Barbican spring to mind.

But, all in all, it was a good year for live music and I’m looking forward to seeing more than sixteen shows this year.

Did you see any great shows this year? Tell us about them in the comments.

The post 2022 in Gigs appeared first on Davblog.

Dave Cross posted a photo:

Goodbye Vivienne

via Instagram instagr.am/p/CmyT_MSNR3-/

Dave Cross posted a photo:

Low sun on Clapham Common this morning

via Instagram instagr.am/p/Cmv4y1eNiPn/

Dave Cross posted a photo:

There are about a dozen parakeets in this tree. I can hear them and (occasionally) see them

via Instagram instagr.am/p/Cmv4rUAta58/

Dave Cross posted a photo:

Sunrise on Clapham Common

via Instagram instagr.am/p/Cmq759NtKtE/

Dave Cross posted a photo:

Brixton Academy

via Instagram instagr.am/p/CmOfgfLtwL_/

Using artificial intelligence (AI) to generate blog posts can be bad for search engine optimization (SEO) for several reasons.

First and foremost, AI-generated content is often low quality and lacks the depth and substance that search engines look for when ranking content. Because AI algorithms are not capable of understanding the nuances and complexities of human language, the content they produce is often generic, repetitive and lacking in originality. This can make it difficult for search engines to understand the context and relevance of the content, which can negatively impact its ranking.

Additionally, AI-generated content is often not well-written or structured, which can make it difficult for readers to understand and engage with. This can lead to a high bounce rate (the percentage of visitors who leave a website after only viewing one page), which can also hurt the website’s ranking.

Furthermore, AI-generated content is often not aligned with the website’s overall content strategy and goals. Because AI algorithms are not capable of understanding the website’s target audience, brand voice, and core messaging, the content they produce may not be relevant or useful to the website’s visitors. This can lead to a poor user experience, which can also hurt the website’s ranking.

Another issue with AI-generated content is that it can be seen as spammy or low quality by both search engines and readers. Because AI-generated content is often produced in large quantities and lacks originality, it can be seen as an attempt to manipulate search engine rankings or trick readers into engaging with the website. This can lead to the website being penalized by search engines or losing the trust and loyalty of its visitors.

In conclusion, using AI to generate blog posts can be bad for SEO for several reasons. AI-generated content is often low quality, poorly written, and not aligned with the website’s content strategy. It can also be seen as spammy or low quality by both search engines and readers, which can hurt the website’s ranking and reputation. It is important for websites to prioritize creating high-quality, original, and relevant content to improve their SEO and provide a positive user experience.

[This post was generated using ChatGPT]

The post 5 Reasons Why Using AI to Generate Blog Posts Can Destroy Your SEO appeared first on Davblog.

“Okay Google, where is Antarctica?”

Children can now get answers to all their questions using smart speakers and digital voice assistants.

A few years ago, children would run to their parents or grandparents to answer their questions. But with the rise of voice assistants into the mainstream in recent years, many children now rely more on technology than on humans.

Is this a good idea?

How does it impact the children?

When children interact with people, it helps them be more thoughtful, creative, and imaginative.

When they use artificial intelligence instead, several issues come to the fore. These include access to age-inappropriate content and an increased likelihood of rude or unpleasant behaviour, which can affect how they treat other people.

As mentioned, technology has both pros and cons. There are benefits to children using these devices, including improved diction, communication and social skills, and the ability to find information without bothering their parents.

Many families find that smart speakers like Amazon Echo and Google Home are useful. They use them for several functions, ranging from answering questions to setting the thermostat. Research shows that up to nine out of ten children between the ages of four and eleven in the US are regularly using smart speakers — often without parental guidance and control. So, what is the best approach for a parent to take?

Children up to seven years old can find it challenging to differentiate between humans and devices, and this leads to one of the biggest dangers. If a device fulfils their requests even when they ask rudely, children may come to behave the same way towards other humans.

Do Parents Think Smart Devices Should Encourage Polite Conversations?

Most parents consider it essential that smart devices should encourage polite conversation as part of nurturing good habits in children. The Campaign for a Commercial-Free Childhood (CCFC) is a US coalition of concerned parents, healthcare professionals and educators. Recently, the CCFC protested against the Amazon Echo Dot Kids Edition, stating that it may affect children’s wellbeing, and asked parents to avoid buying the Amazon Echo.

However, in reality, these smart devices have improved a lot and now focus on encouraging polite conversations with children. A great deal depends on how parents use and present these devices to their children, as that can influence them considerably.

In simple terms, parents want these devices to encourage politeness in their children. At the same time, they want their kids to understand the difference between artificial intelligence and humans while using these technological innovations.

Do Parents Think Their Children are Less Polite While Using Smart Speakers?

Many parents have seen their children behave rudely to smart speakers. Several parents have expressed their concerns through social media, blog posts and forums like Mumsnet. They fear these behaviours can impact their kids when they grow up.

A report published by Child Wise concluded that children who behave rudely to smart devices might be aggressive when they grow up, especially when dealing with other humans. It is, therefore, preferable for children to use polite words when interacting with both humans and smart devices.

What Approaches Have Been Taken By Tech Companies to Address the Problem?

In response to the concerns raised by parents and health professionals, some tech companies have made changes to their virtual assistants and smart speakers.

The parental control features available in Alexa focus on training kids to be more polite. Amazon brands this as Magic Word, and the focus is on positive reinforcement; however, there is no penalty if children don’t speak politely. Available on the Amazon Echo, this tool also offers features like setting bedtimes, switching off devices, and blocking songs with explicit lyrics.

When it comes to Google Home, it has brought in a new feature called Pretty Please. Here, Google will only perform an action when children say “please”. For instance, “Okay Google, please set the timer for 15 minutes.”

You can enable this feature through Google Family Link, where you will find the settings for Home and Assistant, and you can apply it to whichever devices you choose. Once you have set it up and got used to how it works, you won’t have any trouble setting it up again.

These tools and their approaches are highly beneficial for kids and parents. As of now, these devices only offer basic features and limited replies. But with time, there could be technological changes that encourage children to have much more efficient and polite interactions.

George and the Smart Home

It was thinking about issues like this which led me to write my first children’s book — George and the Smart Home. In the book, George is a young boy who has problems getting the smart speakers in his house to do what he wants until he learns to be polite to them.

It is available now, as a paperback and a Kindle book, from Amazon.

Buy it from: AU / BR / CA / DE / ES / FR / IN / IT / JP / MX / NL / UK / US

The post Should Children be Polite While Using Smart Speakers? appeared first on Davblog.

S.
author: J.J. Abrams
name: David
average rating: 3.86
book published: 2013
rating: 0
read at:
date added: 2022/01/16
shelves: currently-reading
review:

The Introvert Entrepreneur
author: Beth Buelow
name: David
average rating: 3.43
book published: 2015
rating: 0
read at:
date added: 2020/01/27
shelves: currently-reading
review:


Some thoughts on ways to measure the quality of Perl code (and, hence, get a basis for improving it)

How (and why) I spent 90 minutes writing a Twitterbot that tweeted the Apollo 11 mission timeline (shifted by 50 years)

A talk from the European Perl Conference 2019 (but not about Perl)

Prawn Cocktail Years
author: Lindsey Bareham
name: David
average rating: 4.50
book published: 1999
rating: 0
read at:
date added: 2019/07/29
shelves: currently-reading
review:

Write. Publish. Repeat. (The No-Luck-Required Guide to Self-Publishing Success)
author: Sean Platt
name: David
average rating: 4.28
book published: 2013
rating: 0
read at:
date added: 2019/06/24
shelves: currently-reading
review:


The slides from a half-day workshop on career development for programmers that I ran at The Perl Conference in Glasgow

A (not entirely serious) talk that I gave at the London Perl Mongers technical meeting in March 2018. It describes how and why I built a website listing the line of succession to the British throne back through history.
Dave Cross / Thursday 25 April 2024 12:02