@davorg davorg commented on plack/Plack#693 · June 6, 2023 14:20
@davorg davorg closed plack/Plack#693 · June 6, 2023 14:20
Use methods for HTML/CSS so it's easier to subclass. #693

I wanted to change the HTML/CSS in a subclass of this class, but using lexical variables makes it too hard. I worked around it, but a change like t…

@davorg davorg opened plack/Plack#695 · June 6, 2023 14:20
Accept an option to define a callback to render the index page. #695

Is this the kind of thing you would prefer? If so, don't merge this until I add docs and tests.

@davorg davorg created a branch in davorg/Plack · June 6, 2023 14:19

@davorg davorg opened plack/Plack#694 · June 6, 2023 13:22
Add support for a dir_index option #694

Allows Plack::App::Directory to support index files (like index.html)

[This post might sound like I’m angry at people making it hard to make progress on some things. That’s not the case at all. I realise completely that people have limited time and they get to choose how they spend it. If people are too busy elsewhere or have moved on to other projects then that’s just how it is and we need to deal with that the best we can.]

Back in December 2020, I wrote a blog post about how I wanted to fix a long-standing problem with App::HTTPThis. I’m happy to report that two and a half years later, the problem has been fixed.

To summarise my previous blog post:

  • App::HTTPThis allows you to run a tiny web server that will serve the contents of a directory over HTTP. But, unfortunately, it doesn’t support default pages like “index.html”.
  • App::HTTPThis uses Plack::App::Directory (which is part of the Plack distribution) to do the work – so it’s that which actually doesn’t support “index.html”.
  • People suggested Plack::Middleware::DirIndex, but that also didn’t quite do the right thing.
  • I submitted a pull request on Plack::App::Directory to add support for “index.html”.
  • I wrote a new module called Plack::App::DirectoryIndex which was like Plack::App::Directory but with added support for “index.html”.
  • I submitted a pull request on App::HTTPThis to use my module in place of Plack::App::Directory.

Now read on…

Both of my pull requests went unactioned for months. In the end, I decided to approach the Perl modules list to ask if I could get co-maintainer permission on App::HTTPThis (it looked like the original maintainer had lost interest – there hadn’t been a release since 2010). When I heard nothing back, I put the project to one side, only occasionally returning to add a new comment on my two pull requests.

Then last month I decided I’d have another go at getting co-maintainer permissions. This time it worked and, earlier this week, I got an email from Neil Bowers saying that the previous maintainer had agreed to give me permission and that I could now upload the module to CPAN.

At that point, I realised that the release mechanism for the module was based on Dist::Zilla and also that fashions in the Dist::Zilla world had changed since App::HTTPThis had last been released. This meant that many of the plugins it used had been deprecated, and I had to do a bit of work to even release the module (which led to a small rant on Reddit).

But I managed to release version 0.003 to CPAN. Only to realise very soon afterwards that my Dist::Zilla-wrangling had missed an important fix. I fixed that and uploaded version 0.004.

I then got an email from PAUSE telling me that I didn’t have permission to release the module. It seems this was a known PAUSE bug, and Neil was able to apply a workaround for me, after which I was able to release version 0.004.

All of which means I now have a version of App::HTTPThis (and its included program, http_this) which supports default pages. You just have to type, for example:

$ http_this --autoindex .

And the current directory will be served over HTTP. Also (and this is the important bit!) if you have a file called “index.html” then that will be used instead of the server displaying a directory listing. It’s a tiny improvement, but one that will be very useful to me. And well worth the two and a half years I’ve invested in getting it released.

So why do I say that the mission is only “almost” completed? Well, there’s still that outstanding pull request on Plack::App::Directory. If that ever gets applied, I’ll remove Plack::App::DirectoryIndex from App::HTTPThis (and mark it as deprecated on CPAN).

This is, of course, a supremely unimportant fix in the grand scheme of things. But I think it illustrates an important issue that the Perl community should be thinking about. The community is shrinking. Or, at least, the part of the community that supports CPAN modules and runs our important infrastructure is shrinking. CPAN is full of modules that are now unsupported. I’ve lost count of the number of bugs I’ve reported or patches I’ve supplied that have been ignored because the module author is no longer interested. In some cases, I’ve taken over the module myself, but that’s not a scalable solution. Honestly, I don’t know what the solution is. But I do think that relying on CPAN modules has got harder over the last few years. And it’s not going to get easier any time soon.

The post Mission (Almost) Accomplished appeared first on Perl Hacks.

Seventy Years of Change

Her Majesty has, of course, seen changes in many areas of society in the seventy years of her reign. But here, we’re most interested in the line of succession. So we thought it would be interesting to look at the line of succession on the day that she took the throne and see what had happened to the people who were at the top of the line of succession on that day. It’s a very different list to today’s.

  1. The Prince Charles, Duke of Cornwall
    We start with the one person who is in exactly the same place as he was seventy years ago. Prince Charles was three years old and hadn’t yet been made Prince of Wales.
  2. The Princess Anne
    Princess Anne has fallen a long way in seventy years. The birth of younger brothers (back in the days when sex mattered in the line of succession) and those brothers having families of their own mean that she is now down at number 17.
  3. Princess Margaret
    We’ve run out of the Queen’s descendants after only two places (today, they fill the top 24 places in the line) so we move to her sister. Princess Margaret had fallen to 11th place before her death in 2002.
  4. Prince Henry, Duke of Gloucester
    We’ve now run out of descendants of George VI, so we need to look at his brothers. This is the father of the current duke. He fell to 8th place before dying in 1974.
  5. Prince William of Gloucester
    The Duke of Gloucester’s eldest son had fallen to position 9 before sadly dying before his father in 1972.
  6. Prince Richard of Gloucester
    As his elder brother had predeceased their father, it was Prince Richard who became Duke of Gloucester when the first duke died in 1974. He is currently in 30th place.
  7. Prince Edward, Duke of Kent
    The first Duke of Kent had died ten years earlier, so it was his son, Prince Edward, then aged 16, who held the title in 1952. He fell out of the top 30 in 2012.
  8. Prince Michael of Kent
    Prince Michael had fallen to 16th place before his marriage to a Catholic, in 1978, excluded him from the line of succession. He was reinstated in 2015 (because the Succession to the Crown Act meant that marriage to a Catholic was no longer a reason for exclusion) but he reappeared outside of the top 30.
  9. Princess Alexandra of Kent
    Princess Alexandra had dropped down the list pretty consistently throughout her life. From 1999 she popped in and out of the top 30 a few times, but she left it for the last time in 2003.
  10. Princess Mary, Princess Royal
    The youngest child and only daughter of George V, Princess Mary had fallen to 17th in line before she died in 1965.
  11. George Lascelles, The 7th Earl of Harewood
    Fell out of the top 30 in 1994 before dying in 2011.
  12. David Lascelles, Viscount Lascelles
    Fell out of the top 30 in 1993.
  13. Gerald Lascelles
    Fell out of the top 30 in 1982 and died in 1998.
  14. Princess Arthur of Connaught, Duchess of Fife
    Fell to 17th before dying in 1959.
  15. James Carnegie, 3rd Duke of Fife
    Fell out of the top 30 in 1981 and died in 2015.
  16. Olaf V, King of Norway
    A bit of a leap as we find the royal family of Norway surprisingly close to the top of the list. King Olaf was a grandson of Edward VII (through Edward’s daughter Maud). He fell out of the top 30 in 1979 and died in 1991.
  17. Prince Harald of Norway
    Prince Harald became king of Norway in 1991. He fell out of the top 30 of the British line of succession in 1977.
  18. Princess Ragnhild of Norway
    Princess Ragnhild fell out of the top 30 in 1973 and died in 2012.
  19. Princess Astrid of Norway
    Princess Astrid fell out of the top 30 in 1964.
  20. Carol II of Romania
    The next-closest royal family to ours is the Romanians. Carol II was a great-grandson of Victoria. The death of George VI moved him up a place from 21 to 20 and he remained there until his death the following year. Carol hadn’t actually been King of Romania since he was forced to abdicate in 1940.
  21. Carol Lambrino
    The question of Carol Lambrino’s legitimacy is a matter of some dispute — so he may not have been in the line of succession at all. But, if he was, he fell out of the top 30 in 1963 and died in 2006.
  22. Paul-Philippe Hohenzollern
    As the son of the possibly illegitimate Carol Lambrino, Paul-Philippe’s place in the line of succession is also in question. But, anyway, he fell out of the top 30 in 1962.
  23. Prince Nicholas of Romania
    Prince Nicholas fell out of the top 30 in 1961 and died in 1978.
  24. Elisabeth of Romania
    Fell to number 27 before dying in 1956.
  25. Maria of Yugoslavia
    Fell to position 30 before dying in 1961.
  26. Peter II of Yugoslavia
    Peter was no longer King of Yugoslavia, having been deposed in 1945. He fell out of the top 30 in 1961 and died in 1970.
  27. Prince Tomislav of Yugoslavia
    Fell out of the top 30 in 1960 and died in 2000.
  28. Prince Andrew of Yugoslavia
    Fell out of the top 30 in 1959 and died in 1990.
  29. Princess Ileana of Romania
    Fell out of the top 30 in 1954 and died in 1991.
  30. Archduke Stefan of Austria
    Fell out of the top 30 in 1953 and died in 1998.

I think that’s an interesting list for a few reasons:

  • The fact that we’ve gone from two of the Queen’s descendants to twenty-four of them on the list (but even that’s not as big a difference as happened during Victoria’s reign).
  • Only ten of the people on the list are still living.
  • There’s a large number of foreign royalty on the list — basically, the second half of the list is taken up by members of the royal families of Norway, Romania and Yugoslavia. This is obviously because of the way that royal families inter-married up until early in the 20th century. We see far less of that now.

So what do you think? Was the 1952 list a surprise to you? Did you expect it to be as different as it is from the current list?

Originally published at https://blog.lineofsuccession.co.uk on February 7, 2022.


Seventy Years of Change — Line of Succession Blog was originally published in Line of Succession on Medium, where people are continuing the conversation by highlighting and responding to this story.

Yesterday’s coronation showed Britain doing what Britain does best — putting on the most gloriously bonkers ceremony the world has seen…

Far out in the uncharted backwaters of the unfashionable end of the western spiral arm of the Galaxy lies a small unregarded yellow sun. Orbiting this at a distance of roughly ninety-two million miles is an utterly insignificant little blue green planet whose ape-descended life forms are so amazingly primitive that they still think digital watches are a pretty neat idea.

Douglas Adams – The Hitchhiker’s Guide to the Galaxy

I don’t still wear a digital watch, but I do like other things that are almost as unhip. In particular, I pine for the time about twenty years ago when web feeds looked like they were about to take over the world. Everyone had their favourite feed reader (I still miss Google Reader) and pretty much any useful web site would produce one or more web feeds that you could subscribe to and follow through your feed reader. For a few years, it was almost unthinkable to produce a web site without publishing a feed which included the changes to the site’s content.

Then, at some point, that changed. It wasn’t that web feeds vanished overnight. They still exist for many sites. But they are no longer ubiquitous. You can’t guarantee they’ll exist for every site you’re interested in. I remember people saying that social media would replace them. I was never convinced by that argument but, interestingly, one of the first times I noticed them vanishing was when Twitter removed their web feed of a user’s posts. They wanted people to use their API instead (so I wrote twitter-json2atom that turned their API’s JSON into an Atom feed – I suspect it no longer works). Honestly, I think the main reason for the fall in popularity of web feeds was that people wanted you to read their content on their web sites where the interesting content was surrounded by uninteresting adverts.

But, as I said, not all web feeds vanished. There are still plenty of them out there (often, I expect because the sites’ owners don’t realise they’re there or don’t know how to turn them off). And that means the web feed-driven technologies of the early 2000s can still be useful.

One such piece of technology is the feed aggregator. I remember these being very popular. You would create a web site and configure it with a list of web feeds that you were interested in. The site would be driven by a piece of software that every few hours would poll the web feeds in the configuration and use the information it found to create a) a web page made up of information from the feeds and b) another feed that contained all of the information from the source feeds. The most popular software for building these sites was called Planet Planet and was written in Python (it seems to have vanished sometime in the last twenty years, otherwise I would link to it). When I wrote a Perl version, I called it (for reasons I now regret) Perlanet.

I still use Perlanet to build planet sites. And they’re all listed at The Planetarium. Recently, I’ve started hosting all my planets on GitHub Pages, using GitHub Actions to rebuild the sites periodically. I thought that maybe other people might be old-skool like me and might want to build their own planets – so in the rest of this post I’ll explain how to do that, using Planet Perl as an example.

The first thing you’ll need is a GitHub account and a repo to store the code for your planet. I’m going to assume you know how to set those up (in the interest of keeping this tutorial short). You only actually need two files to create a planet – a config file and a template for the web site.

Here’s part of the config for Planet Perl:

title: Planet Perl
description: There's More Than One Way To Aggregate It
url: https://perl.theplanetarium.org/
author:
  name: Dave Cross
  email: dave@theplanetarium.org
  twitter: davorg
entries: 75
entries_per_feed: 5
opml_file: docs/opml.xml
page:
  file: docs/index.html
  template: index.tt
feed:
  file: docs/atom.xml
  format: Atom
google_ga: G-HD966GMRYP
cutoff_duration:
  months: 1
feeds:
  - feed: https://www.perl.com/article/index.xml
    title: perl.com
    web: https://perl.com/
  - feed: https://news.perlfoundation.org/atom.xml
    title: Perl Foundation News
    web: https://news.perlfoundation.org/

I’ve tried to make it self-explanatory. At the top, there are various config options for the output (the web page and the aggregated feed) and, below, are details of the feeds that you want to aggregate. Let’s look at the output options first.

  • title and description: these are both strings that you can include on the web page that is created. They’re also used in the aggregate feed that is produced
  • url: this is where the web page will be available on the web
  • author: this contains details of the person publishing the aggregated site and feed. The Twitter handle is optional
  • entries: is the maximum number of entries that your output will contain in total
  • entries_per_feed: is the maximum number of entries that you will use from each of your feeds. This is to stop your output being swamped with entries from a particularly busy feed. This can be omitted, in which case there will be no limit
  • opml_file: OPML stands for “Outline Processor Markup Language”. It used to be trendy to publish an OPML file – a machine-readable data file which contains a list of the feeds that you are aggregating. These days, no-one cares. If you omit this setting, the file won’t be created.
  • page: this contains details of the web page you create. The template is the name of a template file that is used to create the HTML page (more on that below) and file is where the output page is written. If you keep the value used in my example, then things will work well with GitHub Pages as we’ll see later
  • feed: this contains details of the aggregate feed we create. You can choose a format (Atom or RSS) and the filename. Again, the default filename will work well with GitHub Pages
  • google_ga: if this value exists, then it will be used as the Google Analytics identifier for the web page that is created
  • cutoff_duration: this is another way to control which entries are used in your output feed. Any entries that were published longer ago than this period of time will be ignored

Then we have the section of the config file that defines the feeds that we are going to aggregate. Each feed has three data items:

  • feed: the URL of the feed
  • title: a string to use to describe the feed
  • web: the URL of the feed’s original web page

And that’s all you need for the config file. Create that, put it in a file called “perlanetrc” and add it to your repo.

The other file you need is the template for the HTML page. This is usually called “index.tt”. The one I use for Planet Perl is rather complicated (there are all sorts of Javascript tricks in it). The one I use for Planet Davorg is far simpler – and should work well with the config file above. I suggest going with that initially and editing it once you’ve got everything else working.

I said those are the only two files you need. And that’s true. But the site you create will be rather ugly. My default web page uses Bootstrap for CSS, but you’ll probably want to add your own CSS to tweak the way it looks – along with, perhaps, some Javascript and some images. All of the files that you need to make your site work should be added to the /docs directory in your repo.

Having got to this stage, we can test your web site. Well, we’ll need to install Perlanet first. There are two ways to do this. You can either install it from CPAN along with all of its (many) dependencies – using “cpan Perlanet” – or there’s a Docker image that you can use. Either way, once you have the software installed, running it is as simple as running “perlanet”. That will trundle along for a while and, when it has finished, you’ll find new files called “index.html” and “atom.xml” in the /docs directory. My favourite way to test the output locally is to use App::HTTPThis. Having installed this program, you can just run “http_this docs” from the repo’s main directory and then visit http://localhost:7007/index.html to see the site that was produced (or http://localhost:7007/atom.xml to see the feed).

You now have a system to build your new planet. You could run that on a server that’s connected to the internet and set up a cronjob to regenerate the file every few hours. And that’s how I used to run all of my planets. But, recently, I’ve moved to running them on GitHub Pages instead. And that’s what we’ll look at next.

There are two parts to this. We need to configure our repo to have a GitHub Pages site associated with it and we also need to configure GitHub Actions to rebuild the site every few hours. Let’s take those two in turn.

Turning on GitHub Pages is simple enough. Just go to the “Pages” section in your repo’s settings. Choose “GitHub Actions” as the deployment source and tick the box marked “Enforce HTTPS”. Later on, you can look at setting up a custom domain for your site but, for now, let’s stick with the default URL which will be https://<github_username>.github.io/<repo_name>. Nothing will appear yet, as we need to set up GitHub Actions next.

Setting up a GitHub Action workflow is as simple as adding a YAML file to the /.github/workflows directory in your repo. You’ll obviously have to create that directory first. Here’s the workflow definition for Planet Perl (it’s in a file called “buildsite.yml”, but that name isn’t important).

name: Generate web page

on:
  push:
    branches: '*'
  schedule:
    - cron: '37 */4 * * *'
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    container: davorg/perl-perlanet:latest

    steps:
    - name: Checkout
      uses: actions/checkout@v3

    - name: Create pages
      run: |
        mkdir -p docs
        perlanet > perlanet.log 2>&1
    - name: Commit new page
      if: github.repository == 'davorg/planetperl'
      run: |
        git config --global --add safe.directory /__w/planetperl/planetperl
        GIT_STATUS=$(git status --porcelain)
        echo $GIT_STATUS
        git config user.name github-actions[bot]
        git config user.email 41898282+github-actions[bot]@users.noreply.github.com
        git add docs/
        if [ "$GIT_STATUS" != "" ]; then git commit -m "Automated Web page generation"; fi
        if [ "$GIT_STATUS" != "" ]; then git push; fi
    - name: Archive perlanet logs
      uses: actions/upload-artifact@v3
      with:
        name: perlanet.log
        path: ./perlanet.log
        retention-days: 3

    - name: Update pages artifact
      uses: actions/upload-pages-artifact@v1
      with:
        path: docs/

  deploy:
    needs: build
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v2

The first section of the file defines the events that will trigger this workflow. I have defined three triggers:

  1. Pushing a commit. I could be cleverer here and only run the workflow when certain files are changed (for example, the config file or index.tt – see the sketch after this list)
  2. On a schedule. My example runs at 37 minutes past the hour every four hours (so at 04:37, 08:37, etc.)
  3. Manually. The “workflow_dispatch” trigger adds a button to the repo’s “Actions” page on GitHub allowing you to run the workflow manually, whenever you want
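
For what it’s worth, that kind of filtering would just need a “paths” key on the push trigger. Here’s a sketch (the filenames are the ones used in this post, so adjust them to match your repo):

on:
  push:
    branches: '*'
    paths:
      - 'perlanetrc'
      - 'index.tt'
      - '.github/workflows/buildsite.yml'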

Following that, we define the jobs that need to be run and the steps that make up those jobs. We have two jobs – one that builds the new version of the site and one that deploys that new site to GitHub Pages. Remember how I mentioned earlier that there is a Perlanet container on the Docker Hub? Well, you’ll see that the build job runs on that container. This is because pulling a container from the Docker Hub is faster than using a standard Ubuntu container and installing Perlanet.

The steps in these jobs should be pretty self-explanatory. Basically, we check out the repo, run “perlanet” to build the site and then deploy the contents of the /docs directory to the GitHub Pages server.

Once you’ve created this file and added it to your repo, you’ll see details of this workflow on the “Actions” tab in your repo. And whenever you push a change or when a scheduled run takes place (or you press the manual run button) you’ll see logs for the run and (hopefully) your web site will update to contain the latest data.

I reckon you can get a new planet up and running in about half an hour. Oh, and if you label your repo with the topic “perlanet”, then it will automatically be added to The Planetarium.

So, what are you waiting for? What planet would you like to build?

The post Building Planets with Perlanet and GitHub appeared first on Perl Hacks.

I've written before about how I use GitHub Workflows to keep "semi-static" web sites up to date. It's a technique that I've found really useful. When I wrote that blog post, things were pretty simple - you chose which branch held your web site (there was a tradition for a while to use gh-pages) and whether the web site pages were in the repo's root directory or in a directory called /docs. I usually put my web site files into the /docs directory in the master (now main) branch and things worked just fine.

The reason for storing the site in /docs was so that there was a separation between the files that were used to generate the site and the generated output itself. Many of my repos would have a /tt directory that contained templates, a /data directory containing JSON files or an SQLite database, and a /bin directory with a build program that pulled all that stuff together and generated a pile of HTML files that ended up in the /docs directory. In my original blog post on this subject, I demonstrated a GitHub Workflow definition that would regenerate the site (when input files changed or on a schedule) and commit any changed files in the /docs directory. Some GitHub magic would then ensure that the new version of the site was deployed to the GitHub Pages server. All was well with the world.

Then, a few months ago, things got a little more complicated. We gained options for how your GitHub Pages site was deployed. The standard approach that I'd been using was called "deploy from a branch", but there was another option called "GitHub Actions". It seemed likely to me that I really needed to start using the "GitHub Actions" option, but things were still working the old way, and I had far more interesting things to investigate, so I left things the way they were.

Well, I say things were still working in the old way... They were, but something was a bit different. It seemed that the old method was being powered by a new GitHub Workflow called "pages-build-deployment" that had been automatically added to all the repos that needed it. And looking into the details of that workflow, I noticed that it was doing some things that were unnecessary in my repos - for example, it assumed that the site was being built using Jekyll, and that was only true for a couple of my repos. For most of them, that was unnecessary work. So I needed to look into the new deployment option in more detail.

I started a couple of weeks ago by simply switching the option from "deploy from a branch" to "GitHub Actions" in the hope that, because I was already using GitHub Actions, things would Just Work. But, unfortunately, that wasn't the case. My new site was being generated and committed to the repo - but the changes weren't showing up on the live site. So I switched things back until I had time to look into it in more detail.

That time was today. It seemed that I needed to include code in my GitHub Workflow that would actually handle the deployment of the site to the GitHub Pages servers. A quick search of the GitHub Actions marketplace found the Deploy GitHub Pages site action which seemed to be the right thing. But reading the documentation, I worked out that it wanted to deploy the site from an artifact, so I needed to create that first. And then I found Upload GitHub Pages artifact which did the right thing. So it was just a case of adding these two actions to my workflows in the correct way.

Previously, my workflows for these sites just needed a single job (called build) but now I added a deploy job which depended on build. For example, the workflow that builds Planet Perl now looks like this:

name: Generate web page

on:
  push:
    branches: '*'
  schedule:
    - cron: '37 */4 * * *'
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    container: davorg/perl-perlanet:latest

    steps:
    - name: Checkout
      uses: actions/checkout@v3

    - name: Create pages
      run: |
        mkdir -p docs
        perlanet > perlanet.log 2>&1
    - name: Commit new page
      if: github.repository == 'davorg/planetperl'
      run: |
        git config --global --add safe.directory /__w/planetperl/planetperl
        GIT_STATUS=$(git status --porcelain)
        echo $GIT_STATUS
        git config user.name github-actions[bot]
        git config user.email 41898282+github-actions[bot]@users.noreply.github.com
        git add docs/
        if [ "$GIT_STATUS" != "" ]; then git commit -m "Automated Web page generation"; fi
        if [ "$GIT_STATUS" != "" ]; then git push; fi
    - name: Archive perlanet logs
      uses: actions/upload-artifact@v3
      with:
        name: perlanet.log
        path: ./perlanet.log
        retention-days: 3

    - name: Update pages artifact
      uses: actions/upload-pages-artifact@v1
      with:
        path: docs/

  deploy:
    needs: build
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
    - name: Deploy to GitHub Pages
      id: deployment
      uses: actions/deploy-pages@v1

The bits that I've added are the final step in the build job ("Update pages artifact") and the new deploy job. All of the code is largely copied from the documentation of the two actions I mentioned above.

Having made these changes to one of my planet sites, I switched the deployment method and forced the workflow to run. And I was very happy to see that it ran successfully and that the new version of the site appeared at the live URL as soon as the deployment had finished.

This makes me happy as I feel I'm now using GitHub Pages deployments the way they're supposed to be used. I've updated all of my planet sites to use this method, but I have several other sites that I'll need to get round to switching at some point.

As always when I find out something new about a GitHub feature, it leaves me with a couple of other suggestions for improvements:

  • It's possible to call one workflow from another. The planet workflows are all very similar, so I wonder if I can define a single workflow that does all of the work and just call that from the individual workflow definitions, passing in parameters to handle the differences (see the sketch after this list).
  • Now that I'm deploying the sites from artifacts, there's no need for the generated site to actually exist in the repo. That might well make a few things quite a bit easier.
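
On that first idea, here's a very rough sketch of how a reusable workflow might look. The repo name and input are made up for the example - the important parts are the "workflow_call" trigger in the shared workflow and the "uses"/"with" keys in the caller:

# In a shared repo - e.g. davorg/planet-workflows/.github/workflows/build-planet.yml (a hypothetical name)
on:
  workflow_call:
    inputs:
      config:
        description: 'Path to the perlanet config file'
        required: false
        type: string
        default: 'perlanetrc'

# And in each planet repo's own workflow, a job that calls it
jobs:
  build:
    uses: davorg/planet-workflows/.github/workflows/build-planet.yml@main
    with:
      config: perlanetrc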

Anyway, I thought I'd share what I had discovered today. Is anyone else generating web sites this way? How do you do it?

This is a story of one of those nice incidents where something starts off simple, then spirals out of control for a while but, in the end, everyone wins.

On Reddit, a few days ago, someone asked ‘Is there a “Planet Perl” with an RSS feed?’ and a few people replied, pointing out the existence of Planet Perl (which is the first Google result for “Planet Perl”). I’m obviously not marketing that site very well as every time I mention it, I get people (pleasantly) surprised that it exists.

On this occasion, it was Elvin Aslanov who seemed to discover my site for the first time. And, very soon afterwards, he started sending pull requests to add feeds to the site. As a result, we now have three more feeds that are being pulled into the site.

  • Perl on Medium. I’m slightly embarrassed that I hadn’t thought of this myself. I did, after all, once try to start a Perl publication on Medium. I think I must have decided that there are better sites for technical blogging and blanked it from consideration. Medium’s not the busiest of places for Perl bloggers, but there are a few posts there and they’re mostly from people who are outside of the echo chamber – so getting more eyes on their posts is a good idea.
  • Perl questions on Stack Overflow. Another one that would have been obvious if I had thought for a second. I’ve been answering questions on SO for years. It’s a good way to get more perspective on how Perl is being used across the industry. Unfortunately, the feed only includes the titles of the posts – you’ll need to click the link to actually see the question.
  • Perl commits on GitHub. I’m interested in hearing how useful people think this is. I worry slightly that there will be times when the number of commits will overwhelm the other feeds. But maybe that’s a good idea. Perhaps it’s good for more people to see just how busy the Perl 5 Porters are. I’m a bit annoyed that the feed puts everything in a fixed-width font, but not (yet) annoyed enough to do anything about it.

You might know that Planet Perl is driven by Perlanet. So adding new feeds is just a case of adding a few lines to a configuration file. And looking at the pull requests I got from Elvin showed a potential problem in the way the configuration was laid out. Each feed has three lines of YAML configuration. There’s a title for the feed, a URL for a web page that displays the content of the feed and the URL for the feed itself. They’re called “title”, “web” and “url”. And it’s that last name that’s slightly problematic – it’s just not clear enough. Elvin got “web” and “url” muddled up in one of his PRs and, when I pointed that out to him, he suggested that renaming “url” to “feed” would make things much clearer.

I agreed, and the next day I hacked away for a while before releasing version 3.0.0 of Perlanet. In this version, the “url” key is renamed to “feed”. It still accepts the old name (so older config files will still work), but you’ll get a warning if you use the old name.
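
So, using one of the feeds from the Planet Perl config shown earlier on this page, an entry now looks like this (a config written for an older version of Perlanet, with “url” in place of “feed”, still works but will warn):

feeds:
  - feed: https://www.perl.com/article/index.xml   # called "url" before Perlanet 3.0.0
    title: perl.com
    web: https://perl.com/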

I didn’t stop there. Last year, I wrote a blog post about producing a Docker image that already had Perlanet installed – so that it was quicker to rebuild my various planets every few hours. Since then I’ve been rebuilding that image every time I’ve updated Perlanet. But it’s been rather a manual process. And because I’m old and decrepit, I can never remember the steps I go through to rebuild it, tag it correctly and push it to the Docker Hub. This means it always takes far longer than it’s supposed to. So this time, I wrote a script to do that for me. And because I now have the kind of mindset that sees GitHub Workflows everywhere I look, I wrote a Workflow definition that builds and publishes the image any time the Dockerfile changes. I guess the next step will be to write an action that automatically updates the Dockerfile (thereby triggering the rebuild) each time I release a new version of Perlanet.
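
I won’t reproduce my actual workflow here, but a minimal sketch of that kind of image-building workflow might look something like this (the image name is the davorg/perl-perlanet container used by the planet workflows earlier on this page, and the Docker Hub credentials are assumed to be stored as repo secrets):

name: Build Perlanet Docker image

on:
  push:
    paths:
      - 'Dockerfile'
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build and push
        uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: davorg/perl-perlanet:latest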

But that’s a problem for another day. For now, I’m happy with the improvements I’ve made to Planet Perl, Perlanet and the Perlanet Docker infrastructure.

The post Improvements to Planet Perl and Perlanet appeared first on Perl Hacks.

If you're using GitHub Workflows to automate bits of your development process (test runs, deployment, stuff like that) then you're almost certainly using GitHub Actions as well. Actions are pre-built chunks of functionality that you can use in your workflows and which save you having to write a load of code yourself. I've never, for example, written a workflow definition that doesn't use actions/checkout to check out the current repo on the container that is running that workflow. Part of my workflow definition will always look like this:

steps:
  - name: Check out code
    uses: actions/checkout@v3

Or, at least, it will look a lot like that. The bit that might change is that v3 on the end there. That is, of course, the version of the action that I'm using. And that will change when the author of the action releases a new version. Well, strictly speaking, it will change when I notice that there's a new version and bother to update the workflow definition.

Recently, I noticed that two of the actions I use frequently (including the checkout action which I use everywhere) had updated their version numbers and I hadn't known about the change. It took a few hours of work to update the version numbers in all of my workflow definitions.

And life isn't supposed to be that much work.

But there's a solution. And I discovered it thanks to a discussion on Reddit.

If you're doing much work on GitHub (particularly in the JavaScript/Node area), then you'll have met Dependabot. It's a bot that scans GitHub repos for dependencies that are out of date and then (and here's the really clever bit!) generates pull requests that update the repo to use the updated versions.

To be honest, I don't have many projects on GitHub that use the technologies that Dependabot targets by default. The few I have are web sites that use frameworks I've cloned from elsewhere - so I get the occasional PR that updates package.json, and I just apply it and forget about it.

But it seems that Dependabot can do far more than that. You just need to configure it properly in your repo. And one of the things it can do is to scan for out of date actions being used in your workflow definitions. You just need a file called dependabot.yml in your .github directory and the contents should look like this:

# Set update schedule for GitHub Actions

version: 2
updates:

  - package-ecosystem: "github-actions"
    directory: "/"
    schedule:
      # Check for updates to GitHub Actions every week
      interval: "weekly"

Dependabot will then check your actions versions once a week and create PRs for any that are out of date. I know it works, because I added this to all of my repos this morning and it promptly created half a dozen PRs for repos where I had updated the versions but hadn't pushed the changes to GitHub. When I pushed the existing changes, Dependabot checked its PR against the new code, realised the PR was no longer needed and closed the PR. Which was nice :-)

If you're using GitHub Workflows (and I can't really think of any reason why you wouldn't be) then I recommend adding this file to your repos. Of course, GitHub has documentation about this feature.

Dependabot clearly has more uses than I thought it did. I'm going to have to investigate it further. I wonder if it can check CPAN dependencies.

Do you have any Dependabot tricks that I would find useful?

Ratio: The Simple Codes Behind the Craft of Everyday Cooking (1) (Ruhlman's Ratios)
author: Michael Ruhlman
name: David
average rating: 4.06
book published: 2009
rating: 0
read at:
date added: 2023/02/06
shelves: currently-reading
review:

Rather later than usual (again!) here is my review of the best ten gigs I saw in 2022. For the first time since 2019, I did actually see more than ten gigs in 2022 although my total of sixteen falls well short of my pre-pandemic years.

Here are my ten favourite gigs of the year. As always, they’re in chronological order.

  • Pale Waves at the Roundhouse
    I’ve seen Pale Waves a few times now and I think they’ve firmly established their place on my “see them whenever they tour near me” list. This show was every bit as good as I’ve ever seen them.
  • Orchestral Manoeuvres in the Dark at the Royal Albert Hall
    Another band I see whenever I can. This was a slightly different set where the first half was called “Atmospheric” and concentrated on some deeper cuts from their back catalogue and the second half included all the hits.
  • Chvrches at Brixton Academy
    In 2020, I moved to a flat that’s about fifteen minutes’ walk from Brixton Academy. But I had to wait about eighteen months in order to take advantage of that fact. The last couple of times I’ve seen Chvrches were at Alexandra Palace, so it was nice to see them at a smaller venue again. This show featured a not-entirely unexpected guest appearance from Robert Smith.
  • Sunflower Bean at Electric Ballroom
    Another act who I see live as often as I can. And this was a great venue to see them in.
  • Pet Shop Boys at the O2 Arena
    There’s always one show a year that draws me to the soulless barn that is the O2 Arena. Every time I go there, I vow it’ll be the last time – but something always pulls me back. This year it was the chance to see a band I loved in the 80s and have never seen live. This was a fabulous greatest hits show that had been postponed from 2020.
  • Lorde at the Roundhouse
    A new Lorde album means another Lorde tour. And, like Chvrches, she swapped the huge expanse of Alexandra Palace for multiple nights at a smaller venue. This was a very theatrical show that matched the vibe of the Solar Power album really well.
  • LCD Soundsystem at Brixton Academy
    Another show at Brixton Academy. For some reason, I didn’t know about this show until I walked past the venue a few days before and saw the “sold out” signs. But a day or so later, I got an email from the venue offering tickets. So I snapped one up and had an amazing evening. It was the first time I’d seen them, but I strongly suspect it won’t be the last. That’s them in the photo at the top of this post.
  • Roxy Music at the O2 Arena
    Some years there are two shows that force me to the O2 Arena. And this was one of those years. I’ve been a fan of Roxy Music since the 70s but I’d never seen them live. Honestly, it would have been better to have seen them in the 70s or 80s, but it was still a great show.
  • Beabadoobee at Brixton Academy
    Sometimes you go to see an artist because of one song and it just works out. This was one of those nights. In fact, it turns out I didn’t actually know “Coffee For Your Head” very well – I just knew the sample that was used in another artist’s record. But this was a great night and I hope to see her again very soon.
  • Sugababes at Eventim Apollo
    Another night of fabulous nostalgia. The Eventim Apollo seems to have become my venue of choice to see re-formed girl groups from the 80s and 90s – having seen Bananarama, All Saints and now The Sugababes there in recent years. They have a surprising number of hits (far more than I remembered before the show) and they put on a great show.

Not everything could make the top ten though. I think this was the first year that I saw Stealing Sheep and they didn’t make the list (their stage shows just get weirder and weirder and the Moth Club wasn’t a great venue for it), and I was astonished to find myself slightly bored at the Nine Inch Nails show at Brixton Academy.

A few shows sit just outside of the top ten – St. Vincent at the Eventim Apollo, John Grant at the Shepherd’s Bush Empire and Damon Albarn at the Barbican spring to mind.

But, all in all, it was a good year for live music and I’m looking forward to seeing more than sixteen shows this year.

Did you see any great shows this year? Tell us about them in the comments.

The post 2022 in Gigs appeared first on Davblog.

Dave Cross posted a photo:

Goodbye Vivienne

via Instagram instagr.am/p/CmyT_MSNR3-/

Dave Cross posted a photo:

Low sun on Clapham Common this morning

via Instagram instagr.am/p/Cmv4y1eNiPn/

Dave Cross posted a photo:

There are about a dozen parakeets in this tree. I can hear them and (occasionally) see them

via Instagram instagr.am/p/Cmv4rUAta58/

Dave Cross posted a photo:

Sunrise on Clapham Common

via Instagram instagr.am/p/Cmq759NtKtE/

ChatGPT exposes an API, but there’s no CPAN module for taking advantage of that. Let’s put that right (with help from ChatGPT)…


Write a Perl CPAN module for connecting to a ChatGPT server

To create a Perl CPAN module for connecting to a ChatGPT server, you will need to have Perl and the necessary dependencies installed on your system.

To begin, create a new directory for your module and create a file called “ChatGPT.pm” with the following content:

package ChatGPT;

use strict;
use warnings;

use Net::WebSocket::Client;
use Net::WebSocket::Server;
use HTTP::Async;

sub new {
  my ($class, %params) = @_;
  my $self = {
    host => $params{host},
    port => $params{port},
  };

  bless $self, $class;

  return $self;
}

sub send_prompt {
  my ($self, $prompt) = @_;

  my $http = HTTP::Async->new;
  my $client = Net::WebSocket::Client->new(
    url => "ws://$self->{host}:$self->{port}"
  );

  $client->connect;
  $client->send_utf8($prompt);

  my $response = $client->recv;

  $client->disconnect;

  return $response;
}

1;

To use this module in your Perl code, you can create a new ChatGPT object and call the send_prompt method like this:

use ChatGPT;

my $chatgpt = ChatGPT->new(host => 'localhost', port => 8080);
my $response = $chatgpt->send_prompt('What is your name?');
print "Response: $response\n";

This will send the text prompt “What is your name?” to the ChatGPT server running on localhost at port 8080, and print the response to the console.

Note: This is just a basic example of how you can create a Perl CPAN module for connecting to a ChatGPT server. There are many other features and considerations that you may want to add to your module, such as error handling and customization options.

To make your module available on CPAN, you will need to follow the guidelines for creating and distributing CPAN modules, which can be found in the Perl documentation and online resources.

Use Moose to create the class

To use Moose to create the ChatGPT class, you will need to have Moose installed on your system. You can install Moose by running the following command:

cpanm Moose

With Moose installed, you can modify the class like this:

package ChatGPT;

use Moose;
use Net::WebSocket::Client;
use Net::WebSocket::Server;
use HTTP::Async;

has host => (
  is => 'ro',
  isa => 'Str',
  required => 1,
);

has port => (
  is => 'ro',
  isa => 'Int', required => 1,
);

sub send_prompt {
  my ($self, $prompt) = @_;

  my $http = HTTP::Async->new;
  my $client = Net::WebSocket::Client->new(
    url => "ws://$self->{host}:$self->{port}"
  );

  $client->connect; $client->send_utf8($prompt);
  my $response = $client->recv;
  $client->disconnect;

  return $response;
}

1;

In this version of the class, the host and port attributes are defined using Moose’s has keyword. These attributes are required and must be specified when creating a new ChatGPT object. The send_prompt method is unchanged from the previous example.

To create a new ChatGPT object, you can use the new method provided by Moose like this:

use ChatGPT;

my $chatgpt = ChatGPT->new(host => 'localhost', port => 8080);

my $response = $chatgpt->send_prompt('What is your name?');

print "Response: $response\n";

This will create a new ChatGPT object with the specified host and port attributes, and send the text prompt “What is your name?” to the ChatGPT server running on localhost at port 8080. The response will be printed to the console.

Note: This is just one way to use Moose to create the ChatGPT class. There are many other features and customization options available in Moose that you may want to consider adding to your class, such as type constraints, default values, and methods.

Call the class WebService::ChatGPT

To rename the ChatGPT class to WebService::ChatGPT, you will need to update the package name at the top of the file and update the use statements to refer to the new package name.

Here is how you can update the ChatGPT class to use the new package name:

package WebService::ChatGPT;

use Moose;
use Net::WebSocket::Client;
use Net::WebSocket::Server;

use HTTP::Async;

has host => (
  is => 'ro',
  isa => 'Str',
  required => 1,
);

has port => (
  is => 'ro',
  isa => 'Int',
  required => 1,
);

sub send_prompt {
  my ($self, $prompt) = @_;

  my $http = HTTP::Async->new;
  my $client = Net::WebSocket::Client->new(
    url => "ws://$self->{host}:$self->{port}"
  );

  $client->connect; $client->send_utf8($prompt);
  my $response = $client->recv;
  $client->disconnect;

  return $response;
}

1;

With these changes, the class is now called WebService::ChatGPT and can be used like this:

use WebService::ChatGPT;

my $chatgpt = WebService::ChatGPT->new(host => 'localhost', port => 8080);

my $response = $chatgpt->send_prompt('What is your name?');
print "Response: $response\n";

This will create a new WebService::ChatGPT object with the specified host and port attributes, and send the text prompt “What is your name?” to the ChatGPT server running on localhost at port 8080. The response will be printed to the console.

Note: This is just one way to rename the ChatGPT class to WebService::ChatGPT. You may also need to update any other references to the class name in your code and documentation, as well as update the module file name and directory structure to reflect the new package name.

Suggest some tests using the standard Perl testing framework

Here are some examples of tests that you could write for the WebService::ChatGPT module using the standard Perl testing framework (Test::More):

use Test::More;
use WebService::ChatGPT;

# Test creating a new ChatGPT object
my $chatgpt = WebService::ChatGPT->new(host => 'localhost', port => 8080);
isa_ok($chatgpt, 'WebService::ChatGPT', 'Object is a ChatGPT');

# Test sending a prompt and receiving a response
my $response = $chatgpt->send_prompt('What is your name?');
is($response, 'My name is ChatGPT.', 'Response is correct');

# Test sending an invalid prompt
eval { $chatgpt->send_prompt('') };
like($@, qr/Invalid prompt/, 'Invalid prompt error is thrown');

done_testing();

The first test uses the isa_ok function to check that the object created by the new method is an instance of the WebService::ChatGPT class.

The second test uses the is function to compare the expected response to the actual response returned by the send_prompt method.

The third test uses the eval function to catch any exceptions thrown by the send_prompt method when an invalid prompt is provided, and the like function to check that the exception message matches the expected error message.

These tests can be run using the prove command:

prove -l t/chatgpt.t


I haven’t uploaded it to CPAN yet. Do you think I should?

The post Writing a CPAN module that talks to ChatGPT appeared first on Perl Hacks.

Dave Cross posted a photo:

Brixton Academy

via Instagram instagr.am/p/CmOfgfLtwL_/

Using artificial intelligence (AI) to generate blog posts can be bad for search engine optimization (SEO) for several reasons.

First and foremost, AI-generated content is often low quality and lacks the depth and substance that search engines look for when ranking content. Because AI algorithms are not capable of understanding the nuances and complexities of human language, the content they produce is often generic, repetitive, and lacks originality. This can make it difficult for search engines to understand the context and relevance of the content, which can negatively impact its ranking.

Additionally, AI-generated content is often not well-written or structured, which can make it difficult for readers to understand and engage with. This can lead to a high bounce rate (the percentage of visitors who leave a website after only viewing one page), which can also hurt the website’s ranking.

Furthermore, AI-generated content is often not aligned with the website’s overall content strategy and goals. Because AI algorithms are not capable of understanding the website’s target audience, brand voice, and core messaging, the content they produce may not be relevant or useful to the website’s visitors. This can lead to a poor user experience, which can also hurt the website’s ranking.

Another issue with AI-generated content is that it can be seen as spammy or low quality by both search engines and readers. Because AI-generated content is often produced in large quantities and lacks originality, it can be seen as an attempt to manipulate search engine rankings or trick readers into engaging with the website. This can lead to the website being penalized by search engines or losing the trust and loyalty of its visitors.

In conclusion, using AI to generate blog posts can be bad for SEO for several reasons. AI-generated content is often low quality, poorly written, and not aligned with the website’s content strategy. It can also be seen as spammy or low quality by both search engines and readers, which can hurt the website’s ranking and reputation. It is important for websites to prioritize creating high-quality, original, and relevant content to improve their SEO and provide a positive user experience.

[This post was generated using ChatGPT]

The post 5 Reasons Why Using AI to Generate Blog Posts Can Destroy Your SEO appeared first on Davblog.

I’ve been building Docker containers again. And I think you’ll find this one a little more useful than the Perlanet one I wrote about a couple of weeks ago.

Several years ago I got into Travis CI and set up lots of my GitHub repos so they automatically ran the tests each time I committed to the repo. Later on, I also worked out how to tie those test runs into Coveralls.io so I got pretty graphs of how my test coverage was looking. I gave a talk about what I had done.

But two things changed.

Firstly, Travis CI got too popular and, eventually, removed their free service. And, secondly, GitHub Actions was introduced. Over the last few years, I’ve set up many of my repos to use GitHub Actions for CI. But, basically because I’m lazy, I didn’t remove the Travis CI configuration from those repos.

But last week I decided the time was right to start work on that. And when I went to remove the .travis.yml I realised that something was missing from my GitHub Actions CI workflows — they were running the unit tests, but they weren’t reporting on test coverage. So it was time to fix that.

I needed to reimplement the logic that connected Travis CI to Coveralls.io in a GitHub workflow. That actually turned out to be pretty simple. There’s a CPAN module called Devel::Cover::Report::Coveralls which takes the output from Devel::Cover, converts it to the correct format and sends it to Coveralls.io. And, as a bonus, it has documentation showing how to implement that in a GitHub workflow.

So I hacked at my workflow definition file for one of my CPAN modules and within a few minutes I had it working.

Well, I say “a few minutes”, but it took over thirteen minutes to run. It turns out that Devel::Cover::Report::Coveralls is a pretty heavyweight module and needs to install a lot of other modules in order to do its work.

At this point, you can probably guess where this is going. And you’d be right.

I’ve created a Docker container that has Devel::Cover::Report::Coveralls already installed. And, obviously, it’s available for everyone to use from the Docker hub — davorg/perl-coveralls.

A couple of small adjustments to my GitHub workflow and the coverage job is now running on my new container — and takes 29 seconds instead of 13 minutes. So that’s a win.

The relevant section of my workflow file is here:

coverage:
    runs-on: ubuntu-latest
    container: davorg/perl-coveralls:latest
    name: Test coverage
    steps:
      - uses: actions/checkout@v3
      - name: Install modules
        run: cpanm -n --installdeps .
      - name: Coverage
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: cover -test -report Coveralls

And it’s producing nice graphs on Coveralls.io like the one above.

Let me know if you find it useful.

Originally published at Perl Hacks.

I’ve been building Docker containers again. And I think you’ll find this one a little more useful than the Perlanet one I wrote about a couple of weeks ago.

Several years ago I got into Travis CI and set up lots of my GitHub repos so they automatically ran the tests each time I committed to the repo. Later on, I also worked out how to tie those test runs into Coveralls.io so I got pretty graphs of how my test coverage was looking. I gave a talk about what I had done.

But two things changed.

Firstly, Travis CI got too popular and, eventually, removed their free service. And, secondly, GitHub Actions was introduced. Over the last few years, I’ve set up many of my repos to use GitHub Actions for CI. But, basically because I’m lazy, I didn’t remove the Travis CI configuration from those repos.

But last week I decided the time was right to start work on that. And when I went to remove the .travis.yml I realised that something was missing from my GitHub Actions CI workflows – they were running the unit tests, but they weren’t reporting on test coverage. So it was time to fix that.

I needed to reimplement the logic that connected Travis CI to Coveralls.io in a GitHub workflow. That actually turned out to be pretty simple. There’s a CPAN module called Devel::Cover::Report::Coveralls which takes the output from Devel::Cover, converts it to the correct format and sends it to Coveralls.io. And, as a bonus, it has documentation showing how to implement that in a GitHub workflow.

So I hacked at my workflow definition file for one of my CPAN modules and within a few minutes I had it working.

Well, I say “a few minutes”, but it took over thirteen minutes to run. It turns out that Devel::Cover::Report::Coveralls is a pretty heavyweight module and needs to install a lot of other modules in order to do its work.

At this point, you can probably guess where this is going. And you’d be right.

I’ve created a Docker container that has Devel::Cover::Report::Coveralls already installed. And, obviously, it’s available for everyone to use from the Docker hub – davorg/perl-coveralls.

A couple of small adjustments to my GitHub workflow and the coverage job is now running on my new container – and takes 29 seconds instead of 13 minutes. So that’s a win.

The relevant section of my workflow file is here:

coverage:
    runs-on: ubuntu-latest
    container: davorg/perl-coveralls:latest
    name: Test coverage
    steps:
      - uses: actions/checkout@v3
      - name: Install modules
        run: cpanm -n --installdeps .
      - name: Coverage
        env:
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
        run: cover -test -report Coveralls

And it’s producing nice graphs on Coveralls.io like the one above.

Let me know if you find it useful.

The post Containers for Coverage appeared first on Perl Hacks.

It’s October. And that means that Hacktoberfest has started. If you can get four pull requests accepted on other people’s code repositories during October then you can win a t-shirt.

In many ways, I think it’s a great idea. It encourages people to get involved in open source software. But in other ways, it can be a bit of a pain in the arse. Some people go crazy for a free t-shirt and that means you’ll almost certainly get several pull requests that aren’t really of the quality you’d hope for.

I have a particular problem that probably isn’t very common. I’ve talked before about the “semi-static” sites I run on GitHub Pages. There’s some data in a GitHub Repo and every couple of hours the system wakes up and runs some code which generates a few HTML pages and commits those HTML pages into the repo’s “/docs” directory. And — hey presto! — there’s a new version of your web site.

A good example is Planet Perl. The data is a YAML file which mostly consists of a list of web feeds. Every couple of hours we run perlanet to pull in those web feeds and build a new version of the web site containing the latest articles about Perl.

Can you see what the problem is?

The problem is that the most obvious file in the repo is the “index.html” which is the web site. So when people find that repo and want to make a small change to the web site they’ll change that “index.html” file. But that file is generated. Every few hours, any changes to that file are overwritten as a new version is created. You actually want to change “index.tt”. But that uses Template Toolkit syntax, so it’s easy enough to see why people with no Perl knowledge might want to avoid editing that.

The README file for the project explains which files you might want to change in order to make different types of changes. But people don’t read that. Or, if they do read it, they ignore the bits that they don’t like.

So I get pull requests that I have to reject because they change the wrong files.

Last year I got enough of these problematic pull requests that I decided to automate a solution. And it’s this pretty simple GitHub Workflow. It runs whenever my repo receives a pull request and looks at the files that have been changed. If that list of files includes “docs/index.html” then the PR is automatically closed with a polite message explaining what they’ve done wrong.
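The real workflow lives in the repo, but the gist of it is something like this sketch, which uses actions/github-script (the comment wording and the single-file check are simplifications on my part):

name: Reject changes to generated files

on:
  pull_request_target:

jobs:
  check-files:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/github-script@v6
        with:
          script: |
            // List the files changed in this pull request
            const files = await github.paginate(github.rest.pulls.listFiles, {
              owner: context.repo.owner,
              repo: context.repo.repo,
              pull_number: context.payload.pull_request.number,
            });

            // If the generated page has been edited, leave a polite note and close the PR
            if (files.some(f => f.filename === 'docs/index.html')) {
              await github.rest.issues.createComment({
                owner: context.repo.owner,
                repo: context.repo.repo,
                issue_number: context.payload.pull_request.number,
                body: 'Thanks for the PR, but docs/index.html is a generated file. Please edit the templates or the config instead (see the README).',
              });
              await github.rest.pulls.update({
                owner: context.repo.owner,
                repo: context.repo.repo,
                pull_number: context.payload.pull_request.number,
                state: 'closed',
              });
            }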

This makes my life easier. It’s possible it might make your life easier too.

Originally published at Perl Hacks.

“Okay Google. Where is Antarctica?”

Children can now get answers to all their questions using smart speakers and digital voice assistants.

A few years ago, children would run to their parents or grandparents to answer their questions. But with the rise of voice assistants into the mainstream in recent years, many children rely more on technology than on humans.

Is this a good idea?

How does it impact the children?

When children interact with people, it helps them be more thoughtful, creative, and imaginative.

When they use artificial intelligence instead, several issues come into the foreground. These include access to age-inappropriate content and increasing the possibility of being rude or unpleasant, affecting how they treat others.

As mentioned, technology has both pros and cons. There are benefits to children using these devices, including improving diction, communication, social skills, and gaining information without bothering their parents.

Many families find that smart speakers like Amazon Echo and Google Home are useful. They use them for several functions, ranging from answering questions to setting the thermostat. Research shows that up to nine out of ten children between the ages of four and eleven in the US are regularly using smart speakers — often without parental guidance and control. So, what is the best approach for a parent to take?

Children up to seven years old can find it challenging to differentiate between humans and devices, and this can lead to one of the biggest dangers. If the device fulfils requests even when they are made rudely, children may learn to behave in the same way towards other humans.

Do Parents Think Smart Devices Should Encourage Polite Conversations?

Most parents consider it essential that smart devices should encourage polite conversations as a part of nurturing good habits in children. The Campaign for a Commercial-Free Childhood (CCFC) is a US coalition of concerned parents, healthcare professionals, and educators. Recently, the CCFC protested against the Amazon Echo Dot Kids Edition, stating that it may affect children’s wellbeing. Because of this, they asked parents to avoid buying the Amazon Echo.

However, in reality, these smart devices have improved a lot and focus on encouraging polite conversations with children. It is all about how parents use and present these devices to their children, as these factors can influence them a lot.

But in simple terms, parents wish these devices to encourage politeness in their children. At the same time, they want their kids to understand the difference between artificial intelligence and humans while using these technological innovations.

Do Parents Think Their Children are Less Polite While Using Smart Speakers?

Many parents have seen their children behave rudely to smart speakers. Several parents have expressed their concerns through social media, blog posts and forums like Mumsnet. They fear these behaviours can impact their kids when they grow up.

A report published by Child Wise concluded that children who behave rudely to smart devices might be aggressive when they grow up, especially when dealing with other humans. It is, therefore, preferable for children to use polite words when interacting with both humans and smart devices.

What Approaches Have Been Taken By Tech Companies to Address the Problem?

With interventions and rising concerns addressed by parents and health professionals, some tech companies have brought changes to virtual assistants and smart speakers.

The parental control features available in Alexa focus on training kids to be more polite. Amazon brands it as Magic Word, where the focus is on positive reinforcement. However, there is no penalty if children don’t speak politely. Available on Amazon Echo, this tool has added features like setting bedtimes, switching off devices, and blocking songs with explicit lyrics.

When it comes to Google Home, it has brought in a new feature called Pretty Please. Here, Google will only perform an action when children say “please”. For instance, “Okay, Google. Please set the timer for 15 minutes.”

You can enable this feature through Google Family Link, where you can find the settings for Home and Assistant, and apply it to whichever devices you prefer. And once you’ve used it and figured things out, setting it up again is straightforward.

These tools and their approaches are highly beneficial for kids and parents. As of now, these devices only offer basic features and limited replies. But with time, there could be technological changes that encourage children to have much more efficient and polite interactions.

George and the Smart Home

It was thinking about issues like this which led me to write my first children’s book — George and the Smart Home. In the book, George is a young boy who has problems getting the smart speakers in his house to do what he wants until he learns to be polite to them.

It is available now, as a paperback and a Kindle book, from Amazon.

Buy it from: AU / BR / CA / DE / ES / FR / IN / IT / JP / MX / NL / UK / US

The post Should Children be Polite While Using Smart Speakers? appeared first on Davblog.

S.
author: J.J. Abrams
name: David
average rating: 3.86
book published: 2013
rating: 0
read at:
date added: 2022/01/16
shelves: currently-reading
review:

A little later than usual, here’s my review of the gigs I saw last year.

In 2020, I saw four gigs. In 2021, I almost doubled that to seven. Obviously, we spent a lot of the year with most music venues closed, so those few gigs I saw were all in the second half of the year. Usually, I’d list my top ten gigs. This year (as last year) I’ll be listing them all. So here they are in chronological order.

  • Tubular Bells at the Royal Festival Hall
    This was a strange show for several reasons. Firstly, it was advertised as commemorating the fiftieth anniversary of Tubular Bells. But the album was released in 1973, so it was two years early (apparently it was the fiftieth anniversary of when Mike Oldfield started writing the piece). Secondly, Mike Oldfield wasn’t performing – but you needed to examine the publicity very carefully to work that out. And thirdly, there was a troupe of acrobats that were pointlessly leaping around the stage while the musicians played. All in all, I thought this was slightly disappointing.
  • Heaven 17 at the Roundhouse
    Many of these shows were postponed from 2020. This was originally intended to celebrate the fortieth anniversary of the Human League album, Travelogue, but it ended up being the forty-first anniversary. But none of that mattered. This was Heaven 17 playing all of the first two Human League albums and it was absolutely wonderful. Apparently, they had invited Phil Oakey to take part, but he wasn’t interested. That’s Heaven 17 in the photo above.
  • LUMP at the Scala
    LUMP is Laura Marling playing with Tunng’s Mike Lindsay. I kinda assumed that their first album was going to be a one-off, but they produced a second album in 2020. This was the first gig I’d been to in a cramped venue like the Scala for a couple of years and it all got a bit too much for me. I really didn’t enjoy the atmosphere and left during the third or fourth song. I still love the album though and I hope to build up my tolerance for gig crowds over the coming months.
  • The Staves at Shepherd’s Bush Empire
    Actually, this was only two-thirds of the Staves. One of the sisters has had a baby recently and has decided to sit out tours for a couple of years. But the two remaining sisters still put on a great show.
  • Laura Marling at the Roundhouse
    Given how few gigs I saw last year, it’s surprising how repetitive they were. Here’s Laura Marling again (and the Roundhouse again!) Although she has yet to match the heights of the Short Movie tour, Laura Marling is always worth seeing and this show was no exception.
  • Heaven 17 at the Shepherd’s Bush Empire
    More repetition. I think the two Heaven 17 gigs were originally supposed to be several months apart, but the vagaries of the Covid scheduling changes led to them being just two months apart. This one celebrated the fortieth (actually forty-first) anniversary of Heaven 17 starting and was a glorious journey through their back catalogue. Oh, and the support was Pete Wylie, so I can finally say I’ve seen all three members of the Crucial Three live.
  • Orchestral Manoeuvres in the Dark at Hammersmith Apollo
    OMD are just one of those bands that I see live whenever I can. I’ve now been seeing them for over forty years (since they supported Gary Numan in 1980). They have such a massive back catalogue that they can just play hit after hit for two hours. But this show was a bit different as they started by playing all of their 1981 album, Architecture and Morality. They were as good as I’ve ever seen them.

And that was 2021. What will happen in 2022? Well, I have tickets for a dozen or so shows, but who knows how many of them I’ll actually see? I’ve already had emails postponing the Wolf Alice and Peter Hook shows I was going to see this month. I guess I’ll just have to wait and see how the rest of the year pans out.

The post 2021 in Gigs appeared first on Davblog.

Doctor Who has a new showrunner. But he’s actually an old showrunner. Is that a good idea?

Since the news broke yesterday, Doctor Who fan forums have been discussing nothing but the fact that Russell T Davies is returning as showrunner after Chris Chibnall’s regeneration special is broadcast next year. Most fans seem to be very excited by this prospect; I’m not so sure.

Before I start, I should point out that I’ve been a big fan of Russell T Davies since long before he brought Doctor Who back to our screens in 2005. I’ll always be grateful for the work he did to bring the show back and I believe that he’s responsible for some great moments in Doctor Who history.

But I’m not sure I want to see him back as the showrunner. Let me explain why I’m so out of step with most of the show’s fans.

Firstly, although I’m grateful to him for bringing the show back, he’s not my favourite showrunner. Obviously, any Doctor Who is better than no Doctor Who, but there was a lot of stuff in Davies’ first run that I didn’t like. For example, he was the person who first introduced us to companions’ families, which brought a slight soap opera feel to some of the episodes. Also, I thought that he often wrote himself into a bit of a corner. This was most apparent in the end-of-season two-parters. There were many occasions when the first part set up a fantastic premise only to be let down by a finale that just couldn’t live up to the promise. The Stolen Earth was great; Journey’s End was terrible. Then there’s The End of Time. Again, it started off well but had verged well into the ridiculous by the end of the first part. And don’t get me started on the self-indulgent, mawkish nonsense that made up the last twenty minutes of that story — leading to the Tenth Doctor’s regeneration.

I admit, however, that my opinions on Davies’ writing are purely personal. And, because of the massive rise in popularity of the show during his tenure, many viewers see his approach as the gold standard for how the show should work. My other points are, I hope, less opinion-based.

Secondly, Doctor Who is a show that should always be moving forward. In the classic era of the show, previous Doctors and companions would reappear very rarely. When someone left the show, you knew the chances of seeing them again were very slim. When an executive producer left (we didn’t call them showrunners back then) you knew that the show would change in new and experimental ways. Sometimes the changes didn’t work; most of the time they did. Change is fundamental to the show. It’s how the show has kept going for (most of) sixty years.

The newer sections of the audience don’t seem to realise that. I constantly hear fans wanting things to go back to how things were. As soon as Rose was written out at the end of series two, there were calls for her to come back. And while series four has some pretty good stuff in it, I think that bringing Rose back was pandering to the fanbase in an unhealthy way. We now have a situation where fans expect every character who has been written out of the show to be brought back at their whim. There aren’t very many weeks that pass without me seeing someone in a Facebook group suggesting some convoluted way that David Tennant could be brought back to be the Doctor again.

The show must always move forward. It must always change. I believe that RTD knows that, so I hope that his second era in charge will be sufficiently different to his first. But I worry that fans will start asking for Tennant back as the Doctor with Billie Piper by his side. For some fans, that seems to be the only version of the show they will be happy with.

Finally, I worry about what RTD’s reappointment means for the future of the show. When Chibnall’s departure was announced, all of the news stories claimed that he and Whittaker had a “three and out agreement” between themselves and that he only ever planned to do three years running the show. That’s rather at odds with the talk of him having a five-year plan for the show when he was appointed to the role. I realise that he will have done five years in the post by the time he goes, but he will have made three seasons and a handful of specials — so I’m not sure that counts.

No, I think it’s clear that Chibnall has been hounded out of the role by that toxic sector of the fanbase that refuses to give his work on the show a decent chance. And, given that Moffat also put up with a lot of abuse from certain fans, I begin to wonder how easy it is to find someone to take over the job. Chibnall’s departure was announced at the end of July and the BBC would certainly have known about it for some time before that. But they have failed to find someone new and exciting to take over the job, and I wonder if it has become a bit of a poisoned chalice. People want to do the job because, hey, it’s running Doctor Who! But, on the other hand, if you don’t please the fanbase (and no-one can please all of the fanbase) then you’ll be vilified online and hounded off social media. Add to that the fact that both Davies and Moffat cited insane working schedules as part of their reason for leaving and, suddenly, the job doesn’t look quite as tempting.

I have no inside information here at all, but I wonder if the reappointment of RTD was an act of desperation on the part of the BBC. We know that Chibnall is steering the show up to and including a BBC centenary special that will be broadcast in 2022. But the show’s 60th anniversary is the year after that and without a showrunner, you can’t cast a new Doctor and without a new Doctor in place pretty soon, the 60th-anniversary celebrations would seem to be in danger.

The news of the reappointment has all been very celebratory, of course, but I wonder if that’s the whole story. I wonder if the BBC’s approach to RTD was more like this:

“So, that show you resurrected back in 2005. Well, we can’t find anyone to take over as showrunner, and unless we get things moving pretty quickly we’re not going to have a 60th anniversary worth speaking of. Seriously, we’re thinking of just cancelling it… unless you can suggest something that we could do…”

This, of course, leaves RTD thinking that the only way to save his baby is to step in himself. Maybe he’s stepped in as a stop-gap until the BBC finds someone else to take over. The announcement says he’s signed on for the 60th special and following series. But that’s a bit vague (because the English language doesn’t have a plural for “series”!) so who knows how long he’ll hang around for. Time will tell, I guess.

But, if you’re one of those fans who think it’s big or clever to be unrelentingly negative about the showrunner on social media, please stop and consider whether you’re part of a problem that could end up with no-one wanting the job and the show being cancelled.

All-in-all, I wish that the BBC hadn’t done this. I would have far preferred to see the show moving forward. But if, as I suspect, the alternative was no new Doctor Who for the foreseeable future, then obviously this is a good plan. I’m keen to see what Davies has in store.

But first I’m really excited to see what Chibnall has in store for his final series and the subsequent specials. If series 13 improves on series 12 to the extent that series 12 improved on series 11, then it’s going to be great.

The post The Return of RTD appeared first on Davblog.

The Introvert Entrepreneur
author: Beth Buelow
name: David
average rating: 3.44
book published: 2015
rating: 0
read at:
date added: 2020/01/27
shelves: currently-reading
review:


Some thoughts on ways to measure the quality of Perl code (and, hence, get a basis for improving it)

How (and why) I spent 90 minutes writing a Twitterbot that tweeted the Apollo 11 mission timeline (shifted by 50 years)

A talk from the European Perl Conference 2019 (but not about Perl)

Prawn Cocktail Years
author: Lindsey Bareham
name: David
average rating: 4.50
book published: 1999
rating: 0
read at:
date added: 2019/07/29
shelves: currently-reading
review:

Write. Publish. Repeat. (The No-Luck-Required Guide to Self-Publishing Success)
author: Sean Platt
name: David
average rating: 4.29
book published: 2013
rating: 0
read at:
date added: 2019/06/24
shelves: currently-reading
review:


The slides from a half-day workshop on career development for programmers that I ran at The Perl Conference in Glasgow

A (not entirely serious) talk that I gave at the London Perl Mongers technical meeting in March 2018. It talks about how and why I built a web site listing the line of succession to the British throne back through history.
Dave Cross / Tuesday 06 June 2023 18:03