I’ve mentioned before how much I enjoyed Olaf Alders’ talk, Whither Perl, at the Perl and Raku Conference in Toronto last month. I think it’s well worth spending forty minutes watching it. It triggered a few ideas that I’ll be writing about over the coming weeks and, today, I wanted to start by talking briefly about the idea of GitHub Organisations and introducing a couple of organisations that I’ve recently set up.
Olaf talks about GitHub Organisations as a way to ensure the continuity of your projects. And I think that’s a very important point. I’ve written a few times over recent years about the problems of getting fixes into CPAN modules when the maintainer has gone missing. This is also true of non-CPAN projects that members of the community might find useful. By setting up an organisation and inviting a few trusted people to join it, you can share the responsibility of looking after your projects. So if you lose interest or drift away, there’s a better chance that someone else will be able to take up the reins.
It’s actually a thought that I had before going to Toronto. A couple of months ago, I set up the Perl Tools Team organisation and transferred ownership of four of my repos.
There will, no doubt, be other repos that I’ll want to transfer over to this organisation in time. And, of course, if you have a tool that is used by the Perl community, I’ll be happy to add it to the list. Just get in touch and we can talk about it.
The other organisation (and I set this up just this morning) now owns all of the repos for my CPAN modules. I won’t be adding anybody else’s repos to this organisation, but if you send PRs to any of these projects (and I’m looking at you, Mohammad) then don’t be surprised if you get added to the organisation too! If you’ve watched my talk on GitHub Actions for Perl Development, then you might remember that I was developing a GitHub Workflow definition for releasing modules to CPAN. That’s still a work in progress, but now I’m thinking that I could add my PAUSE credentials to the GitHub secrets store for this organisation and the GitHub workflow could release the code to CPAN using my credentials without my input (but, obviously, I’m still considering the security implications of that – it would certainly only ever be available to me and a few trusted lieutenants).
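To make that idea a little more concrete, here’s a rough sketch of the kind of release workflow I have in mind. To be clear, this isn’t the work-in-progress workflow from my talk: the perl container, the step layout and the secret names (PAUSE_USERNAME and PAUSE_PASSWORD) are all just assumptions for illustration. It builds the distribution with Dist::Zilla and uploads it with cpan-upload from CPAN::Uploader, pulling the PAUSE credentials from the organisation’s secrets store.

name: Release to CPAN

on:
  release:
    types: [published]

jobs:
  release:
    runs-on: ubuntu-latest
    # The official Perl image already includes cpanm
    container: perl:5.38

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Install build tools
        run: |
          cpanm --notest Dist::Zilla CPAN::Uploader
          dzil authordeps --missing | cpanm --notest

      - name: Build the distribution
        run: dzil build

      - name: Upload to PAUSE
        # PAUSE_USERNAME and PAUSE_PASSWORD are assumed secret names;
        # cpan-upload reads its credentials from ~/.pause
        run: |
          echo "user ${{ secrets.PAUSE_USERNAME }}" > ~/.pause
          echo "password ${{ secrets.PAUSE_PASSWORD }}" >> ~/.pause
          cpan-upload *.tar.gz

Whether I’d be happy letting something like that run completely unattended is, of course, exactly the security question I mentioned above.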
This is still all very new and it will definitely develop in several directions over the coming months. But it feels like a move in the right direction. After twenty years of CPAN releases, it feels like I’m turning my work into a “proper” project. And, hopefully, it can serve as a template that other people can follow. I’ll let you know how it goes.
So, what do you think? Is this the right model for CPAN development (and, also, Perl infrastructure development) moving forward? Would you be interested in joining either of these organisations? Do you have any tools that the Perl Tools Team could maintain for you?
The post GitHub Organisations appeared first on Perl Hacks.
At the start of this month, I was in Toronto for the Perl and Raku Conference. It was my first time ever at a Perl conference in North America and it’s been over twenty years since I spoke at a conference on that continent (it was an OSCON in San Diego).
I gave a talk about using GitHub Actions for Perl development (which you can watch on YouTube) and a lightning talk about my CPAN Dashboard project (which is run using GitHub Actions). I’ve had a few more sales of the book in the weeks following the conference, so I guess I persuaded a few more people to buy it.
I’ve written a blog post about my time at the conference in which I recommend a few talks that I saw while I was there. In particular, Olaf Alders’ talk Whither Perl struck a chord with me. It was a measured rumination on where Perl is currently and what the community should be thinking about in order to get themselves into a better place. It got me thinking about the community and my place in it. I don’t have any conclusions yet, but there’s certainly a blog post brewing that will, hopefully, be written sometime this week.
One thing at a time…
When I called this newsletter “One Thing at a Time”, it was really a message to myself. I’m far too good at thinking of business ideas, starting prototypes and never turning them into real income. I want to curb that instinct in myself and actually build something into a profitable business. I don’t mind if I end up with two or three (maybe more) small but profitable businesses, but I know that will only happen if I work on one of them at a time.
For example, here’s a sample of the things that are in my head at the moment (I’ve included links to some web sites that are very much works in progress - don’t be surprised if some of them make little sense in their current form):
Two t-shirt/clothing ideas that I think could really sell
A sequel to the GitHub Actions book which would be about GitHub Pages
A coffee-table book about the royal family
More marketing for GitHub Actions Essentials (Do people want webinars on the subject? Would they pay for them?)
A tool for Continuous Integration of Perl projects (written using GitHub Actions and GitHub Pages)
The book More Than Code (and associated consultancy) that I’ve discussed here before
A micro-publishing house that helps technical people get their knowledge published as ebooks
There are more, but eight items is a long enough list to be getting on with. I need to prioritise these ideas and work on one of them at a time.
So what criteria do I use to prioritise them? Well, honestly, I think it’s about making money. Which of them will people pay for? And speed. How quickly can I start making money from them?
The GitHub Pages book would be pretty simple to write. I have an outline already and could knock the book out in a few weeks. The Perl CI tool would be fun to work on, but I don’t think there’s much of a market for people to pay for it. The genealogy app is a bit more work to get it up and running and I’m not entirely sure how I’d monetise it. The royal family book would just be a lot of work. The publishing house is a nice little project - but it’s very dependent on how popular the individual books are.
I like More Than Code. And I have a lot of it written already. For those of you who weren’t around when I last discussed it - it’s basically a book that’s a mentor for people in a software development career. It talks them through all the bits of their job that aren’t actually writing code. So it covers stuff like being an efficient developer by knowing tools like your editor and your operating system better. There’s also a section about various software design and development processes and, at the end, it verges into “fluffier” areas like building a personal brand and getting known in the industry.
I haven’t decided which of these projects I’ll be concentrating on next. But I thought it would be interesting to give you a bit of a peek into my head as I’m making up my mind.
What do you think? Are there any projects there that you’d like to see me take to completion? Which of them would you pay for?
I’ll let you know what I choose in my next newsletter.
Dave…
It’s been over twenty years since I spoke at a conference in North America. That was at OSCON in San Diego. I’ve actually never spoken at a YAPC, TPC or TPRC in North America. I have the standard European concern about being seen to encourage the USA’s bad behaviour by actually visiting it, so when I saw this year’s TPRC was in Canada, I thought that gave me the perfect opportunity to put that right.
So I proposed a talk which was accepted.
It was also the first time I’d been to any kind of conference since before the pandemic. My last conference was in Riga in 2019.
Despite Air Transat’s determination to prevent it from happening, my girlfriend and I made it to Toronto a few days before the conference started. It was her birthday, so we spent Sunday and Monday relaxing and getting to know Downtown Toronto. On Monday afternoon, we moved to the conference hotel and prepared to geek out.
One of the first people I spoke to at the conference on Tuesday morning was fellow Londoner Mohammad Anwar. As is the law (I don’t make the rules!) I was mildly rebuking him about the ridiculous amount of work he puts into the Perl community. I told him the story of a senior member of the community who, about ten years ago, said to me: “I don’t understand why you still make so much effort, Dave. You have your White Camel, don’t you?” I swear I didn’t know that Mohammad was about to be awarded the 2022 White Camel – but it gave me the opportunity to go up to him and say, “See, you can stop making such an effort now!” I hope he doesn’t really stop; but he should really take things a bit easier.
The next three days were a happy blur of geekery. As always at these conferences, there were too many talks that I wanted to see and, inevitably, I still have to catch up on some of them on YouTube (thanks to the dedicated video team for making them available so quickly).
There are a number of talks that I’d like more people to see. I think it would be a great use of your time to watch these videos:
I gave two talks – a lightning talk on CPAN Dashboard and a longer talk on GitHub Actions for Perl Development. After not giving a talk for four years, I felt a little rusty – but I think they went ok.
And then, after a seemingly-fleeting three days, it was all over and we all returned to our own countries. There’s another conference in Finland next month. Unfortunately, I’m unable to be there – and last week’s experience makes me regret that.
It was great to catch up with old friends and share our mutual interest in Perl. It was particularly great after four years without a conference to go to. I hope it’s not four years until I’m at another.
The post The Perl and Raku Conference, Toronto 2023 appeared first on Perl Hacks.
Hi all,
When I last wrote to you (which is longer ago than I would like it to have been), I mentioned a podcast I would be talking on. My episode was published yesterday, so I can now tell you all about it.
It’s an excellent example of serendipity. Gavin Henry is one of the presenters of Software Engineering Radio. He was looking for someone to appear on the podcast to talk about GitHub Actions when one of my newsletters plugging GitHub Actions Essentials appeared in his inbox. We exchanged a few emails and then spent a very pleasant hour or so talking about GitHub Actions back in April. The back-end team at SE Radio then took our conversation and turned it into a real podcast which is now available for you to listen to wherever you get your podcasts.
If you’re not really sure why you should care about GitHub Actions (and, more importantly, why you should buy a book about it) then I think our conversation does a good job of explaining what GitHub Actions is and how it can help you.
Other marketing efforts on the book continue to have a slow effect. I seem to be selling far more copies on LeanPub than on Amazon. I wonder if that’s my geeky audience having a preference for not using Amazon if possible?
Like the rest of the world, I’ve been experimenting with the new generation of Generative AI tools. Sometimes it feels like I’ve handed over most of the management of my life to GPT4. I’ve been using it to plan meals, exercise sessions and even sightseeing trips. It’s been really impressive.
But, as a geek, it’s coding where these tools really stand out for me. Over the last few years, I’ve become a reasonably competent Javascript programmer. But GPT4 has really increased my productivity. As an example, here’s my web page which links to the weekly status reports from the Perl Steering Council. The initial version built a static page each week when the new report was published. But now it loads the data from a JSON file and builds a table that can be sorted in various useful ways. None of that would have been beyond my Javascript abilities, but it would have taken me a few hours to get there. With a few carefully-chosen prompts, the new version was ready in about an hour. It wasn’t all plain-sailing - GPT4 consistently hallucinated the wrong name for a Font Awesome icon - but it was far easier than doing it myself.
I’ve also been experimenting with GitHub Copilot. I had put off trying it for months because I thought (wrongly, as it happens) that it only supported a small number of languages - that didn’t include Perl. But that’s not the case. Its Perl support seems to be pretty good. Ovid wrote a blog post which summarises his (positive) experience with Copilot and mine is very similar.
My “aha!” moment came when I decided to refactor some code and Copilot read my mind and produced the correct code without any prompts from me.
I was working on a web application that needed to switch between two display modes - one had basically no CSS and the other used CSS to make the output far prettier. The class had an attribute called “pretty” which determined which version to use. So the method that renders the output contains code like this:
my $css = $self->pretty ? $self->pretty_css : $self->standard_css;
And that $css variable was then interpolated into the output HTML. I decided to put that logic into a new method called “css()”. I just typed “sub css {” and when I pressed the enter key, Copilot suggested a body for the method that was exactly what I wanted.
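To give a flavour of what that looked like, here’s roughly what the finished method looks like. It’s a reconstruction rather than the exact code from the project, but it uses the attribute and accessor names described above:

# Reconstruction for illustration - wraps the ternary shown above in
# its own method, so the rendering code can just ask for $self->css.
sub css {
  my $self = shift;

  return $self->pretty ? $self->pretty_css : $self->standard_css;
}

And the line in the rendering method then becomes simply: my $css = $self->css;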
I was sold. I’m currently on the free trial, but once that’s up it’s going to be well worth the tenner a month it’s going to cost me.
I was at the AWS Summit in London yesterday. As you’d guess, generative AI was a big topic there as well. AWS already has a number of AI-based services and that number is only going to increase as time passes. They now have a Copilot competitor called CodeWhisperer which might be worth a look.
I always come away from AWS Summit enthused to get my stuff moved into the AWS cloud and this year was no exception.
A few years ago, I started writing a series of blog posts called Into the Cloud, where I was going to explain how I moved one of my personal projects onto AWS. I never completed that project (real jobs got in the way) but I think the time is right to get back to it. And I think I’ll have a good helper this time. I’ve already had a conversation with ChatGPT about the project and we’ve come up with a plan to complete the transition. I’ll let you know how it goes.
As you might be able to tell, I’m having a lot of fun writing books, experimenting with new toys and working on my own projects. But the mortgage still needs to be paid, so I’m in the market to take on a bit of work. If you or anyone you know can make use of my skills (Perl, Linux, web development, databases, GitHub Actions, and so on) then please get in touch.
Until next time,
Cheers,
Dave…
[This post might sound like I’m angry at people making it hard to make progress on some things. That’s not the case at all. I realise completely that people have limited time and they get to choose how they spend it. If people are too busy elsewhere or have moved on to other projects then that’s just how it is and we need to deal with that the best we can.]
Back in December 2020, I wrote a blog post about how I wanted to fix a long-standing problem with App::HTTPThis. I’m happy to report that two and a half years later, the problem has been fixed.
To summarise my previous blog post:
Now read on…
Both of my pull requests went unactioned for months. In the end, I decided to approach the Perl modules list to ask if I could get co-maintainer permission on App::HTTPThis (it looked like the original maintainer had lost interest – there hadn’t been a release since 2010). When I heard nothing back, I put the project to one side, only occasionally returning to add a new comment on my two pull requests.
Then last month I decided I’d have another go at getting co-maintainer permissions. This time it worked and, earlier this week, I got an email from Neil Bowers saying that the previous maintainer had agreed to give me permission and that I could now upload the module to CPAN.
At that point, I realised that the release mechanism for the module was based on Dist::Zilla and also that fashions in the Dist::Zilla world had changed since App::HTTPThis had last been released. This meant that many of the plugins used had been deprecated and I had to do a bit of work to even release the module (which led to a small rant on Reddit).
But I managed to release version 0.003 to CPAN. Only to realise very soon afterwards that my Dist::Zilla-wrangling had missed an important fix. I fixed that and released version 0.004.
I then got an email from PAUSE telling me that I didn’t have permission to release the module. It seems this was a known PAUSE bug and Neil was able to apply a workaround for me. I was able to release version 0.004.
All of which means I now have a version of App::HTTPThis (and its included program, http_this) which supports default pages. You just have to type, for example:
$ http_this --autoindex .
And the current directory will be served over HTTP. Also (and this is the important bit!) if you have a file called “index.html” then that will be used instead of the server displaying a directory listing. It’s a tiny improvement, but one that will be very useful to me. And well worth the two and a half years I’ve invested in getting it released.
So why do I say that the mission is only “almost” completed? Well, there’s still that outstanding pull request on Plack::App::Directory. If that ever gets applied, I’ll remove Plack::App::DirectoryIndex from App::HTTPThis (and mark it as deprecated on CPAN).
This is, of course, a supremely unimportant fix in the grand scheme of things. But I think it illustrates an important issue that the Perl community should be thinking about. The community is shrinking. Or, at least, the part of the community that supports CPAN modules and runs our important infrastructure is shrinking. CPAN is full of modules that are now unsupported. I’ve lost count of the number of bugs I’ve reported or patches I’ve supplied that have been ignored because the module author is no longer interested. In some cases, I’ve taken over the module myself, but that’s not a scalable solution. Honestly, I don’t know what the solution is. But I do think that relying on CPAN modules has got harder over the last few years. And it’s not going to get easier any time soon.
The post Mission (Almost) Accomplished appeared first on Perl Hacks.
Her Majesty has, of course, seen changes in many areas of society in the seventy years of her reign. But here, we’re most interested in the line of succession. So we thought it would be interesting to look at the line of succession on the day that she took the throne and see what had happened to the people who were at the top of the line of succession on that day. It’s a very different list to today’s.
I think that’s an interesting list for a few reasons:
So what do you think? Was the 1952 list a surprise to you? Did you expect it to be as different as it is from the current list?
Originally published at https://blog.lineofsuccession.co.uk on February 7, 2022.
Seventy Years of Change — Line of Succession Blog was originally published in Line of Succession on Medium, where people are continuing the conversation by highlighting and responding to this story.
Yesterday’s coronation showed Britain doing what Britain does best — putting on the most gloriously bonkers ceremony the world has seen…
I released GitHub Actions Essentials into the world yesterday. The link will take you to a website which has links to all the places where you can buy it. Currently, that’s Amazon and LeanPub, but I’m open to suggestions of other places where you’d expect to buy it.
Currently, it’s ebook-only, but I’m looking into options for making it available in the physical world as well. Let me know if you’d find that useful.
This feels like the culmination of many weeks of work - researching the subject, writing the book, editing the book, wrangling it into a format that can be sold through various outlets. But, in reality, the real work starts now.
I now need to sell it.
I think I’ve mentioned before that I’m not a natural marketer (is it “marketer” or “marketeer”?). I would love to live in a world where I mention my new book on Twitter, Facebook and LinkedIn and suddenly the sales start flooding in. But, sadly, it’s not like that at all. There’s a lot more work that needs to be done. I did the easy tweeting and stuff like that yesterday. This newsletter is the next step in the campaign. It’s a relatively easy step, though, because it feels a bit like talking to a group of friends. I hope you don’t feel like you’re being marketed to - even though you are.
Buy my book! It’s great!!
But that’s not how marketing is supposed to go these days. Hey, I’ve read Seth Godin - I know how this stuff works. Let’s try again.
GitHub Actions will make your life easier by automating a lot of the software engineering process
GitHub Actions Essentials will show you how to use GitHub Actions in order to make your life easier
Buy my book! It’s great!!
See! That’s better, isn’t it? My book solves problems and makes your life better.
Then there are the bits of the process that a traditional publisher would do for me. Sending out review copies to people (and, later, chasing for reviews), paying for adverts in the right places (knowing where “the right places” are). I’ll need to do all of that myself. (Hint - if you’re able to help me get a review copy to someone who will write a relatively high-profile review, then please get in touch.)
A couple of things have serendipitously fallen into my lap. LeanPub has started doing a series of “book launch videos” where they interview the authors of recently-published books. Overnight I got an invitation to take part in one of those - so I’ll be interviewed next Monday for that. Also, when my last newsletter went out, one of the subscribers was looking for someone to be interviewed about GitHub Actions for their (pretty well-known) software engineering podcast. We’re recording the conversation next week, so I’ll be able to share more details in the next newsletter.
Another important marketing tip is to use any existing audience that you have. Some of you will know me from my work in the Perl community. The Perl community has annual conferences in both Europe and North America and I’ve proposed a talk on “GitHub Actions for Perl Development” to both of them. Yesterday I heard that it had been accepted for the conference in Toronto in July, so it looks very much like I’ll be speaking there. This will be the first time I’ve spoken at a conference in North America since OSCON was in San Diego (over twenty years ago).
It’s possible that writing this newsletter is slowly turning into a marketing avoidance tactic. So I should probably stop now and turn my attention to some other potential customers. I’ll be back with more updates in a week or two. Oh, but before I go, I wanted to include an extract for you (step 1: provide useful content). So, below is “Appendix B: List of Useful GitHub Actions and Integrations”. Hopefully, this will give you a brief flavour of the kinds of things that you can do with GitHub Actions (thereby giving you a reason to buy the book - this marketing lark is addictive!)
Speak to you soon,
Dave…
This appendix provides a curated list of useful GitHub Actions and integrations that can enhance your workflows and improve your development process. While this list is not exhaustive, it should help you discover the potential of GitHub Actions and encourage you to explore the GitHub Actions Marketplace for more actions.
actions/checkout: This action checks out your repository so your workflow can access its contents. It is one of the most commonly used actions in GitHub Actions workflows.
Repository: https://github.com/actions/checkout
actions/setup-node: Sets up a Node.js environment on the runner, allowing you to run Node.js scripts and tools in your workflow.
Repository: https://github.com/actions/setup-node
actions/setup-python: Sets up a Python environment on the runner, allowing you to run Python scripts and tools in your workflow.
Repository: https://github.com/actions/setup-python
actions/cache: Caches dependencies and build outputs to improve workflow execution times.
Repository: https://github.com/actions/cache
actions/upload-artifact and actions/download-artifact: Uploads build artifacts from a job and downloads them for use in subsequent jobs.
actions/create-release: Creates a new release on GitHub and uploads assets to the release.
Repository: https://github.com/actions/create-release
actions/github-script: Allows you to write inline scripts that interact with the GitHub API and other GitHub Actions features using Octokit and the actions-toolkit libraries.
Repository: https://github.com/actions/github-script
codecov/codecov-action: Uploads your code coverage reports to Codecov, a popular code coverage analysis and reporting tool.
Repository: https://github.com/codecov/codecov-action
deployments/ftp-deploy: Deploys your repository to a remote server via FTP or SFTP.
Repository: https://github.com/deployments/ftp-deploy
jakejarvis/lighthouse-action: Runs Google Lighthouse audits on your web application and generates reports.
Repository: https://github.com/jakejarvis/lighthouse-action
peter-evans/create-pull-request: Automates the creation of pull requests from within your GitHub Actions workflows.
semantic-release/semantic-release: Fully automated version management and package publishing based on semantic versioning rules.
snyk/actions: Scans your dependencies for vulnerabilities using Snyk, a popular open-source security platform.
Repository: https://github.com/snyk/actions
SonarCloud/github-action: Integrates SonarCloud, a cloud-based code quality and security analysis platform, into your GitHub Actions workflows.
Repository: https://github.com/SonarCloud/github-action
stale/stale: Automatically marks issues and pull requests as stale after a period of inactivity, and eventually closes them if no further activity occurs.
Repository: https://github.com/stale/stale
Remember to explore the GitHub Actions Marketplace for additional actions and integrations that may suit your specific needs.
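To give you an idea of how a few of these fit together, here’s a small example workflow (not one from the book, just a sketch of a typical Node.js CI job) that combines actions/checkout, actions/setup-node and actions/cache:

name: CI

on: [push, pull_request]

jobs:
  test:
    runs-on: ubuntu-latest

    steps:
      # Check out the repository so the workflow can see the code
      - uses: actions/checkout@v3

      # Install a specific Node.js version on the runner
      - uses: actions/setup-node@v3
        with:
          node-version: 18

      # Cache the npm download cache, keyed on the lockfile
      - uses: actions/cache@v3
        with:
          path: ~/.npm
          key: ${{ runner.os }}-npm-${{ hashFiles('**/package-lock.json') }}
          restore-keys: ${{ runner.os }}-npm-

      - run: npm ci
      - run: npm test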
Far out in the uncharted backwaters of the unfashionable end of the western spiral arm of the Galaxy lies a small unregarded yellow sun. Orbiting this at a distance of roughly ninety-two million miles is an utterly insignificant little blue green planet whose ape-descended life forms are so amazingly primitive that they still think digital watches are a pretty neat idea.
Douglas Adams – The Hitchhiker’s Guide to the Galaxy
I don’t still wear a digital watch, but I do like other things that are almost as unhip. In particular, I pine for the time about twenty years ago when web feeds looked like they were about to take over the world. Everyone had their favourite feed reader (I still miss Google Reader) and pretty much any useful web site would produce one or more web feeds that you could subscribe to and follow through your feed reader. For a few years, it was almost unthinkable to produce a web site without publishing a feed which included the changes to the site’s content.
Then, at some point, that changed. It wasn’t that web feeds vanished overnight. They still exist for many sites. But they are no longer ubiquitous. You can’t guarantee they’ll exist for every site you’re interested in. I remember people saying that social media would replace them. I was never convinced by that argument but, interestingly, one of the first times I noticed them vanishing was when Twitter removed their web feed of a user’s posts. They wanted people to use their API instead (so I wrote twitter-json2atom that turned their API’s JSON into an Atom feed – I suspect it no longer works). Honestly, I think the main reason for the fall in popularity of web feeds was that people wanted you to read their content on their web sites where the interesting content was surrounded by uninteresting adverts.
But, as I said, not all web feeds vanished. There are still plenty of them out there (often, I expect because the sites’ owners don’t realise they’re there or don’t know how to turn them off). And that means the web feed-driven technologies of the early 2000s can still be useful.
One such piece of technology is the feed aggregator. I remember these being very popular. You would create a web site and configure it with a list of web feeds that you were interested in. The site would be driven by a piece of software that every few hours would poll the web feeds in the configuration and use the information it found to create a) a web page made up of information from the feeds and b) another feed that contained all of the information from the source feeds. The most popular software for building these sites was called Planet Planet and was written in Python (it seems to have vanished sometime in the last twenty years, otherwise I would link to it). When I wrote a Perl version, I called it (for reasons I now regret) Perlanet.
I still use Perlanet to build planet sites. And they’re all listed at The Planetarium. Recently, I’ve started hosting all my planets on GitHub Pages, using GitHub Actions to rebuild the sites periodically. I thought that maybe other people might be old-skool like me and might want to build their own planets – so in the rest of this post I’ll explain how to do that, using Planet Perl as an example.
The first thing you’ll need is a GitHub account and a repo to store the code for your planet. I’m going to assume you know how to set those up (in the interest of keeping this tutorial short). You only actually need two files to create a planet – a config file and a template for the web site.
Here’s part of the config for Planet Perl:
title: Planet Perl
description: There's More Than One Way To Aggregate It
url: https://perl.theplanetarium.org/
author:
  name: Dave Cross
  email: dave@theplanetarium.org
  twitter: davorg
entries: 75
entries_per_feed: 5
opml_file: docs/opml.xml
page:
  file: docs/index.html
  template: index.tt
feed:
  file: docs/atom.xml
  format: Atom
google_ga: G-HD966GMRYP
cutoff_duration:
  months: 1
feeds:
  - feed: https://www.perl.com/article/index.xml
    title: perl.com
    web: https://perl.com/
  - feed: https://news.perlfoundation.org/atom.xml
    title: Perl Foundation News
    web: https://news.perlfoundation.org/
I’ve tried to make it self-explanatory. At the top, there are various config options for the output (the web page and the aggregated feed) and, below, are details of the feeds that you want to aggregate. Let’s look at the output options first.
Then we have the section of the config file that defines the feeds that we are going to aggregate. Each feed has three data items: “feed” is the URL of the site’s web feed (Atom or RSS), “title” is the name that will be displayed for that feed, and “web” is the URL of the site’s human-readable home page.
And that’s all you need for the config file. Create that, put it in a file called “perlanetrc” and add it to your repo.
The other file you need is the template for the HTML page. This is usually called “index.tt”. The one I use for Planet Perl is rather complicated (there are all sorts of Javascript tricks in it). The one I use for Planet Davorg is far simpler – and should work well with the config file above. I suggest going with that initially and editing it once you’ve got everything else working.
I said those are the only two files you need. And that’s true. But the site you create will be rather ugly. My default web page uses Bootstrap for CSS, but you’ll probably want to add your own CSS to tweak the way it looks – along with, perhaps, some Javascript and some images. All of the files that you need to make your site work should be added to the /docs directory in your repo.
Having got to this stage, you can test your web site. Well, you’ll need to install Perlanet first. There are two ways to do this. You can either install it from CPAN, along with all of its (many) dependencies, using “cpan Perlanet”, or there’s a Docker image that you can use. Either way, once you have the software installed, running it is as simple as running “perlanet”. That will trundle along for a while and, when it has finished, you’ll find new files called “index.html” and “atom.xml” in the /docs directory. My favourite way to test the output locally is to use App::HTTPThis. Having installed this program, you can just run “http_this docs” from the repo’s main directory and then visit http://localhost:7007/index.html to see the site that was produced (or http://localhost:7007/atom.xml to see the feed).
You now have a system to build your new planet. You could run that on a server that’s connected to the internet and set up a cronjob to regenerate the file every few hours. And that’s how I used to run all of my planets. But, recently, I’ve moved to running them on GitHub Pages instead. And that’s what we’ll look at next.
There are two parts to this. We need to configure our repo to have a GitHub Pages site associated with it and we also need to configure GitHub Actions to rebuild the site every few hours. Let’s take those two in turn.
Turning on GitHub Pages is simple enough. Just go to the “Pages” section in your repo’s settings. Choose “GitHub Actions” as the deployment source and tick the box marked “Enforce HTTPS”. Later on, you can look at setting up a custom domain for your site but, for now, let’s stick with the default URL which will be https://<github_username>.github.io/<repo_name>. Nothing will appear yet, as we need to set up GitHub Actions next.
Setting up a GitHub Action workflow is as simple as adding a YAML file to the /.github/workflows directory in your repo. You’ll obviously have to create that directory first. Here’s the workflow definition for Planet Perl (it’s in a file called “buildsite.yml”, but that name isn’t important).
name: Generate web page

on:
  push:
    branches: '*'
  schedule:
    - cron: '37 */4 * * *'
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    container: davorg/perl-perlanet:latest

    steps:
      - name: Checkout
        uses: actions/checkout@v3

      - name: Create pages
        run: |
          mkdir -p docs
          perlanet > perlanet.log 2>&1

      - name: Commit new page
        if: github.repository == 'davorg/planetperl'
        run: |
          git config --global --add safe.directory /__w/planetperl/planetperl
          GIT_STATUS=$(git status --porcelain)
          echo $GIT_STATUS
          git config user.name github-actions[bot]
          git config user.email 41898282+github-actions[bot]@users.noreply.github.com
          git add docs/
          if [ "$GIT_STATUS" != "" ]; then git commit -m "Automated Web page generation"; fi
          if [ "$GIT_STATUS" != "" ]; then git push; fi

      - name: Archive perlanet logs
        uses: actions/upload-artifact@v3
        with:
          name: perlanet.log
          path: ./perlanet.log
          retention-days: 3

      - name: Update pages artifact
        uses: actions/upload-pages-artifact@v1
        with:
          path: docs/

  deploy:
    needs: build

    permissions:
      pages: write
      id-token: write

    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}

    runs-on: ubuntu-latest

    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v2
The first section of the file defines the events that will trigger this workflow. I have defined three triggers: a push to any branch of the repo; a schedule that runs the workflow every four hours (at 37 minutes past the hour); and workflow_dispatch, which adds a “Run workflow” button to the Actions tab so I can start a run manually.
Following that, we define the jobs that need to be run and the steps that make up those jobs. We have two jobs – one that builds the new version of the site and one that deploys that new site to GitHub Pages. Remember how I mentioned earlier that there is a Perlanet container on the Docker Hub? Well, you’ll see that the build job runs on that container. This is because pulling a container from the Docker Hub is faster than using a standard Ubuntu container and installing Perlanet.
The steps in these jobs should be pretty self-explanatory. Basically, we check out the repo, run “perlanet” to build the site and then deploy the contents of the /docs directory to the GitHub Pages server.
Once you’ve created this file and added it to your repo, you’ll see details of this workflow on the “Actions” tab in your repo. And whenever you push a change or when a scheduled run takes place (or you press the manual run button) you’ll see logs for the run and (hopefully) your web site will update to contain the latest data.
I reckon you can get a new planet up and running in about half an hour. Oh, and if you label your repo with the topic “perlanet”, then it will automatically be added to The Planetarium.
So, what are you waiting for? What planet would you like to build?
The post Building Planets with Perlanet and GitHub appeared first on Perl Hacks.
If you’ve been following my blog posts on dev.to, then you’ll know that I’ve been doing a lot of work with GitHub Actions over the last year or so. If you haven’t seen what I’ve been writing, here are a few links to catch you up:
I think that GitHub Actions is a useful feature which allows you to automate large parts of the software development process. Every day that I use it, I find another interesting thing about it and I’m sure I’ve only scratched the surface.
All of which means that over the last few weeks, I’ve found myself writing a book about GitHub Actions. And it’s almost ready to be published. The front cover is below and I’ve been working on a website about the book (it’s very much a work in progress - feel free to take a look but don’t be surprised if things don’t work and please don’t share the link with anyone else yet).
One thing that the web page does have already is a table of contents - so you can see whether or not you’ll be interested in the book. And, below, I present the first preview content from the book. This is an extract from section 10.3 - Automating Project Management and Collaboration.
Automating project management and collaboration tasks can significantly improve the efficiency of your development process and help your team stay focused on delivering high-quality code. GitHub Actions provides a flexible platform for creating custom workflows to automate various aspects of your project management and collaboration efforts.
In this section, we will discuss several examples of how you can leverage GitHub Actions to automate project management and collaboration tasks.
Issues and pull requests are at the core of GitHub's collaborative features, allowing team members to report bugs, suggest enhancements, and submit code changes. By automating their management with GitHub Actions, you can save time, improve organization, and ensure consistency in your project.
Here are some key aspects of automating issue and pull request management with GitHub Actions:
Labelling: Automatically apply labels to new issues and pull requests based on predefined criteria. For example, you can label issues as "bug" or "enhancement" based on their description or use specific labels for pull requests targeting particular branches. This helps categorize and prioritize tasks within your project.
Assignment: Assign issues and pull requests to specific team members or groups based on predefined rules. This ensures that the right person is responsible for addressing each task and helps distribute workload evenly across your team.
Triage: Automatically move issues and pull requests through different stages of your development process. For example, you can create a workflow that automatically moves a pull request to a "review" stage when it's ready for review, and then to a "testing" stage when it's been approved.
Notifications: Send custom notifications to team members, Slack channels, or email addresses when specific events occur. This can help keep your team informed about the progress of issues and pull requests, and ensure that everyone is on the same page.
Automated checks: Implement automated checks and validations for pull requests to ensure that they meet certain quality standards before they can be merged. For example, you can enforce that all pull requests pass your CI pipeline or meet specific code coverage thresholds.
Merging: Automate the process of merging pull requests once they meet certain criteria, such as passing all required checks or receiving a specific number of approvals. This can help streamline your development process and ensure that code changes are merged promptly and consistently.
To get started with automating issue and pull request management, explore the available GitHub Actions in the marketplace that are designed for these purposes. You can also create custom workflows tailored to your project's specific needs. By implementing automation in your issue and pull request management, you'll be able to focus more on the actual development work and maintain a more organized, efficient, and collaborative project environment.
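For example, a minimal labelling workflow built on the actions/labeler action might look like the sketch below. The label-to-path rules live in a separate .github/labeler.yml file, which isn’t shown here, and the version pin is illustrative.
# A minimal sketch of automatic labelling: apply labels to new pull
# requests based on which paths they touch, as defined in
# .github/labeler.yml.
name: Label pull requests
on: pull_request_target
permissions:
  contents: read
  pull-requests: write
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/labeler@v4
        with:
          repo-token: ${{ secrets.GITHUB_TOKEN }}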
GitHub Project Boards provide a visual representation of your project's progress, allowing you to manage tasks, prioritize work, and track milestones. Integrating GitHub Actions with Project Boards can streamline your project management and help you maintain an up-to-date view of your project's status.
Here are some key aspects of integrating GitHub Actions with Project Boards:
Automatic Card Creation: Automatically create cards on your Project Board when new issues or pull requests are opened. This ensures that all tasks are tracked in a centralized location and allows team members to get an overview of the work that needs to be done.
Card Movement: Automate the movement of cards between different columns on your Project Board based on specific triggers or events. For example, when a pull request is approved, you can automatically move its corresponding card to a "Ready for Merge" column. This helps maintain an accurate representation of your project's progress and minimizes manual work for your team.
Card Assignment: Assign cards to team members automatically based on predefined rules or conditions. This can help distribute the workload more evenly and ensure that the right person is responsible for each task.
Updating Card Details: Automatically update card details, such as labels, assignees, or due dates, based on changes in the associated issue or pull request. This keeps your Project Board up-to-date and ensures that all relevant information is easily accessible.
Project Board Notifications: Send custom notifications to your team when specific events occur on your Project Board, such as when a card is moved to a different column or when a due date is approaching. This can help keep your team informed and ensure that everyone is aware of important updates or deadlines.
To integrate GitHub Actions with your Project Boards, you'll need to create custom workflows that interact with the GitHub API to perform actions related to Project Boards. Explore the available actions in the GitHub Actions Marketplace for managing Project Boards or create your own custom actions tailored to your project's needs.
By integrating GitHub Actions with your Project Boards, you can automate and streamline your project management processes, leading to increased efficiency and better collaboration among team members.
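As a sketch of the automatic card creation idea, GitHub’s own actions/add-to-project action can add every new issue to a Projects board. The project URL, secret name and version pin below are placeholders rather than values from a real setup.
# A minimal sketch: add newly opened issues to a project board.
name: Add issues to project
on:
  issues:
    types: [opened]
jobs:
  add-to-project:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/add-to-project@v0.5.0
        with:
          # Placeholder project URL - point this at your own board
          project-url: https://github.com/orgs/your-org/projects/1
          # Organisation projects generally need a PAT with the right
          # scopes; the default GITHUB_TOKEN is usually not enough
          github-token: ${{ secrets.ADD_TO_PROJECT_PAT }}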
In many software projects, collaboration extends beyond your immediate team to include external teams or third-party services. Integrating GitHub Actions with these external resources can facilitate seamless collaboration, streamline communication, and ensure that all parties stay informed and in sync.
Here are some key aspects of collaborating with external teams and services using GitHub Actions:
Interacting with External Repositories: Set up workflows that interact with external repositories, such as creating pull requests, opening issues, or updating code in a forked repository. This can be particularly helpful when working with open-source projects or collaborating with other organizations on shared initiatives.
Third-Party Service Integration: Integrate GitHub Actions with popular third-party services such as Jira, Trello, Slack, or Microsoft Teams to automate various project management, communication, and collaboration tasks. For example, you can create a workflow that posts a message to a specific Slack channel when a new pull request is opened or synchronize GitHub issues with Jira tickets.
Shared Workflows and Actions: Share workflows and actions across multiple repositories or organizations. This allows you to establish best practices and maintain consistency across your projects. You can also leverage GitHub's reusable workflows feature to minimize duplication of effort and streamline the setup process for new projects.
Access Control and Permissions: Configure access controls and permissions for your GitHub Actions workflows to ensure that only authorized users can perform specific actions or access sensitive information. This is particularly important when working with external collaborators, as it helps maintain the security and integrity of your codebase.
Collaboration on Custom Actions: Encourage collaboration on the development of custom GitHub Actions by making the source code available in a public repository. This allows external contributors to submit improvements, report issues, or suggest new features, fostering a community-driven approach to action development.
To successfully collaborate with external teams and services using GitHub Actions, it's essential to plan and implement appropriate workflows, access controls, and integrations. This will enable your team to work efficiently with external collaborators, harness the power of third-party services, and maintain the security and integrity of your projects.
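On the notification side, you don’t necessarily need a marketplace action at all - a plain incoming-webhook call from a run step is often enough. Here’s a rough sketch; the secret name is a placeholder for wherever you store your Slack webhook URL.
# A minimal sketch: post to a Slack channel when a pull request is opened.
name: Notify Slack
on:
  pull_request:
    types: [opened]
jobs:
  notify:
    runs-on: ubuntu-latest
    steps:
      - name: Post message
        env:
          # Incoming webhook URL stored as a repository secret (placeholder name)
          SLACK_WEBHOOK_URL: ${{ secrets.SLACK_WEBHOOK_URL }}
        run: |
          curl -X POST -H 'Content-Type: application/json' \
            --data "{\"text\": \"New pull request: ${{ github.event.pull_request.html_url }}\"}" \
            "$SLACK_WEBHOOK_URL"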
Automating code review and feedback processes using GitHub Actions can significantly improve the overall quality of your codebase and streamline collaboration among team members. By incorporating automated checks and reviews, you can ensure that your project adheres to established coding standards and best practices while minimizing human errors and oversight.
Here are some key aspects of automating code review and feedback using GitHub Actions:
Linting and Static Code Analysis: Integrate linters and static code analysis tools in your workflows to automatically check for syntax errors, code style violations, and other issues. These tools can provide immediate feedback on pull requests, ensuring that your codebase remains clean and maintainable. Popular tools include ESLint for JavaScript, Pylint for Python, and RuboCop for Ruby.
Automated Testing: Configure your workflows to run automated tests on every pull request or commit. This helps identify potential issues early in the development process and ensures that new changes do not introduce regressions. You can also use GitHub Actions to run tests in parallel or across multiple environments, further increasing the reliability and robustness of your codebase.
Code Review Automation: Use GitHub Actions to automate various aspects of the code review process, such as automatically assigning reviewers, enforcing review policies, or checking for compliance with specific guidelines. This can help streamline the review process and ensure that all code changes are thoroughly vetted before being merged into the main branch.
Automated Feedback: Integrate GitHub Actions with communication platforms like Slack or Microsoft Teams to provide real-time feedback on code changes. For example, you can create a workflow that sends a message to a specific channel whenever a new pull request is opened or when automated tests fail. This helps keep your team informed and encourages prompt action on issues.
Performance and Security Checks: Use GitHub Actions to automatically analyze your code for performance bottlenecks, security vulnerabilities, and other potential issues. Tools like SonarQube or Snyk can help you identify and address these concerns early in the development process, ensuring that your code remains secure and performant.
By automating code review and feedback processes using GitHub Actions, you can establish a more efficient and effective collaboration environment for your team. This, in turn, leads to higher quality code, fewer defects, and faster development cycles, ultimately resulting in a more successful and robust software project.
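As a Perl-flavoured sketch of the linting idea (the container image, severity level and directory here are just examples), a workflow that runs perlcritic against every pull request could look like this:
# A minimal sketch: run Perl::Critic over lib/ for every pull request.
name: Lint Perl code
on: pull_request
jobs:
  perlcritic:
    runs-on: ubuntu-latest
    container: perl:latest
    steps:
      - uses: actions/checkout@v3
      - name: Install Perl::Critic
        run: cpanm -n Perl::Critic
      - name: Run perlcritic
        run: perlcritic --gentle lib/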
Effective documentation and knowledge management are critical to the success of any software project. They ensure that all team members have access to the information they need to understand, contribute to, and maintain the codebase. GitHub Actions can help automate and streamline various aspects of documentation and knowledge management, making it easier for your team to stay informed and up-to-date.
Here are some key strategies for streamlining documentation and knowledge management using GitHub Actions:
Automated Documentation Generation: Use GitHub Actions to automatically generate and update project documentation based on code comments, markdown files, or other sources. Tools like JSDoc, Sphinx, and Jekyll can help you create comprehensive and well-structured documentation with minimal effort. By integrating these tools into your workflow, you can ensure that your documentation remains current and accurate as your codebase evolves.
Documentation Linting and Validation: Validate your documentation for syntax, structure, and consistency using GitHub Actions. Tools like markdownlint, textlint, or reStructuredText linters can help you enforce documentation standards and best practices. By automatically checking documentation in pull requests or commits, you can maintain high-quality documentation that is easy to understand and navigate.
Automated Knowledge Base Updates: Integrate GitHub Actions with your knowledge management system or wiki to automatically update documentation and other resources when changes are made to your codebase. For example, you could create a workflow that updates a Confluence page or a GitHub Wiki whenever a new feature is added or an existing feature is modified. This ensures that your team always has access to the most up-to-date information.
Change Tracking and Notification: Use GitHub Actions to monitor changes to documentation and other knowledge resources, and notify team members of relevant updates. This can help keep your team informed about important changes and encourage collaboration and knowledge sharing. Integrating GitHub Actions with communication platforms like Slack or Microsoft Teams can facilitate real-time notifications and discussions around documentation updates.
Automating Release Notes: Generate and publish release notes automatically using GitHub Actions. By extracting relevant information from commit messages, pull requests, and issue tracker updates, you can create detailed and accurate release notes that help users understand the changes and improvements in each new version of your software.
By leveraging GitHub Actions to automate and streamline documentation and knowledge management processes, you can foster a more informed and collaborative development environment. This leads to better decision-making, more efficient workflows, and ultimately, a more successful and maintainable software project.
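As a small sketch of the documentation-linting idea, a workflow that checks every Markdown file on each push might look something like this (the glob and the choice of markdownlint-cli are just examples):
# A minimal sketch: lint all Markdown files whenever anything is pushed.
name: Lint documentation
on: push
jobs:
  markdownlint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Run markdownlint
        run: npx markdownlint-cli '**/*.md'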
I need your help with the book. I think the book is in pretty good shape, but I’m looking for a handful of people to give it a read and let me know what you think about it. I’m really looking for help in three areas:
Is the subject matter interesting or useful? And do I go into enough detail (or, perhaps, too much)?
Is the content accurate? Or have I made a bit of a fool of myself in some places?
How’s my English? I like to think that I know how to write reasonably well, but with something this substantial, there are always going to be a few errors that will creep in. Let’s get those knocked on the head.
I’m afraid the timescales are pretty tight. I’d like to get this published towards the end of next week. So please only volunteer if you’ll have time to spend on the project over the next few days.
In exchange, you’ll get a credit in the foreword and a copy of the ebook in whatever format you prefer (as long as that’s EPUB, Mobi or PDF!)
Please get in touch by email or on Twitter or Mastodon if you’d like to help. I’ll send a link to the first half dozen or so people who get in touch.
And, hopefully, I’ll have more news about this next week.
Thanks for reading.
Dave…
Today I’m announcing a brand new addition to my Substack publication: the One Thing at a Time subscriber chat.
This is a conversation space in the Substack app that I set up exclusively for my subscribers — kind of like a group chat or live hangout. I’ll post short prompts, thoughts, and updates that come my way, and you can jump into the discussion.
To join our chat, you’ll need to download the Substack app, now available for both iOS and Android. Chats are sent via the app, not email, so turn on push notifications so you don’t miss the conversation as it happens.
Download the app by clicking this link or the button below. Substack Chat is now available on both iOS and Android.
Open the app and tap the Chat icon. It looks like two bubbles in the bottom bar, and you’ll see a row for my chat inside.
That’s it! Jump into my thread to say hi, and if you have any issues, check out Substack’s FAQ.
I've written before about how I use GitHub Workflows to keep "semi-static" web sites up to date. It's a technique that I've found really useful. When I wrote that blog post, things were pretty simple - you chose which branch held your web site (there was a tradition for a while to use gh-pages) and whether the web site pages were in the repo's root directory or the directory called /docs. I usually put my web site files into the /docs directory in the master (now main) branch and things worked just fine.
The reason for storing the site in /docs was so that there was a separation between the files that were used to generate the site and the generated output site itself. Many of my repos would have a /tt directory that contained templates, a /data directory which contains JSON files or an SQLite database and a /bin directory with a build program that pulls all that stuff together and generates a pile of HTML files that end up in the /docs directory. In my original blog post on this subject, I demonstrated a GitHub Workflow definition that would regenerate the site (when input files changed or on a schedule) and commit any changed files in the /docs directory. Some GitHub magic would then ensure that the new version of the site was deployed to the GitHub Pages server. All was well with the world.
Then, a few months ago, things got a little more complicated. We gained options about how your GitHub Pages site was deployed. The standard version that I'd been using before was called "deploy from a branch" but there was another option called "GitHub Actions". It seemed likely to me that I really needed to start using the "GitHub Actions" option, but things were still working the old way, and I had far more interesting things to investigate, so I left things the way they were.
Well, I say things were still working in the old way... They were, but something was a bit different. It seemed that the old method was being powered by a new GitHub Workflow called "pages-build-deployment" that had been automatically added to all the repos that needed it. And looking into the details of that workflow, I noticed that it was doing some things that were unnecessary in my repos - for example it assumed that the site was being built using Jekyll and that was only true for a couple of my repos. For most of them, that was unnecessary work. So I needed to look into the new deployment option in more detail.
I started a couple of weeks ago by simply switching the option from "deploy from a branch" to "GitHub Actions" in the hope that, because I was already using GitHub Actions, things would Just Work. But, unfortunately, that wasn't the case. My new site was being generated and committed to the repo - but the changes weren't showing up on the live site. So I switched things back until I had time to look into it in more detail.
That time was today. It seemed that I needed to include code in my GitHub Workflow that would actually handle the deployment of the site to the GitHub Pages servers. A quick search of the GitHub Actions marketplace found the Deploy GitHub Pages site action which seemed to be the right thing. But reading the documentation, I worked out that it wanted to deploy the site from an artifact, so I needed to create that first. And then I found Upload GitHub Pages artifact which did the right thing. So it was just a case of adding these two actions to my workflows in the correct way.
Previously, my workflows for these sites just needed a single job (called build) but now I've added a deploy job which depends on build. For example, the workflow that builds Planet Perl now looks like this:
name: Generate web page
on:
  push:
    branches: '*'
  schedule:
    - cron: '37 */4 * * *'
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    container: davorg/perl-perlanet:latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Create pages
        run: |
          mkdir -p docs
          perlanet > perlanet.log 2>&1
      - name: Commit new page
        if: github.repository == 'davorg/planetperl'
        run: |
          git config --global --add safe.directory /__w/planetperl/planetperl
          GIT_STATUS=$(git status --porcelain)
          echo $GIT_STATUS
          git config user.name github-actions[bot]
          git config user.email 41898282+github-actions[bot]@users.noreply.github.com
          git add docs/
          if [ "$GIT_STATUS" != "" ]; then git commit -m "Automated Web page generation"; fi
          if [ "$GIT_STATUS" != "" ]; then git push; fi
      - name: Archive perlanet logs
        uses: actions/upload-artifact@v3
        with:
          name: perlanet.log
          path: ./perlanet.log
          retention-days: 3
      - name: Update pages artifact
        uses: actions/upload-pages-artifact@v1
        with:
          path: docs/
  deploy:
    needs: build
    permissions:
      pages: write
      id-token: write
    environment:
      name: github-pages
      url: ${{ steps.deployment.outputs.page_url }}
    runs-on: ubuntu-latest
    steps:
      - name: Deploy to GitHub Pages
        id: deployment
        uses: actions/deploy-pages@v1
The bits that I've added are the final step in the build job ("Update pages artifact") and the new deploy job. All of the code is largely copied from the documentation of the two actions I mentioned above.
Having made these changes to one of my planet sites, I switched the deployment method and forced the workflow to run. And I was very happy to see that it ran successfully and the new version of the site appeared at the live URL as soon as the deployment had finished.
This makes me happy as I feel I'm using the GitHub Pages deployment the way that they're supposed to be used. I've updated all of my planet sites to use this method, but I have several other sites that I'll need to get round to switching at some point.
As always when I find out something new about a GitHub feature, it leaves me with a couple of other suggestions for improvements:
Anyway, I thought I'd share what I had discovered today. Is anyone else generating web sites this way? How do you do it?
This is a story of one of those nice incidents where something starts off simple, then spirals out of control for a while but, in the end, everyone wins.
On Reddit, a few days ago, someone asked ‘Is there a “Planet Perl” with an RSS feed?’ and a few people replied, pointing out the existence of Planet Perl (which is the first Google result for “Planet Perl”). I’m obviously not marketing that site very well as every time I mention it, I get people (pleasantly) surprised that it exists.
On this occasion, it was Elvin Aslanov who seemed to discover my site for the first time. And, very soon afterwards, he started sending pull requests to add feeds to the site. As a result, we now have three more feeds that are being pulled into the site.
You might know that Planet Perl is driven by Perlanet. So adding new feeds is just a case of adding a few lines to a configuration file. And looking at the pull requests I got from Elvin showed a potential problem in the way the configuration was laid out. Each feed has three lines of YAML configuration. There’s a title for the feed, a URL for a web page that displays the content of the feed and the URL for the feed itself. They’re called “title”, “web” and “url”. And it’s that last name that’s slightly problematic – it’s just not clear enough. Elvin got “web” and “url” muddled up in one of his PRs and, when I pointed that out to him, he suggested that renaming “url” to “feed” would make things much clearer.
I agreed, and the next day I hacked away for a while before releasing version 3.0.0 of Perlanet. In this version, the “url” key is renamed to “feed”. It still accepts the old name (so older config files will still work), but you’ll get a warning if your config file still uses the old name.
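To make that concrete, a feed entry in a Perlanet configuration file now looks something like this (the URLs are illustrative rather than copied from the real Planet Perl config):
# One entry in the feeds list of a Perlanet config file,
# using the new "feed" key introduced in version 3.0.0.
feeds:
  - title: Perl Hacks
    web: https://perlhacks.com/
    feed: https://perlhacks.com/feed/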
I didn’t stop there. Last year, I wrote a blog post about producing a docker image that already had Perlanet installed – so that it was quicker to rebuild my various planets every few hours. Since then I’ve been rebuilding that image every time I’ve updated Perlanet. But it’s been rather a manual process. And because I’m old and decrepit, I can never remember the steps I go through to rebuild it, tag it correctly and push it to the Docker Hub. This means it always takes far longer than it’s supposed to. So this time, I wrote a script to do that for me. And because I now have the kind of mindset that sees GitHub Workflows everywhere I look, I wrote a Workflow definition that builds and publishes the image any time the Dockerfile changes. I guess the next step will be to write an action that automatically updates the Dockerfile (thereby triggering the rebuild) each time I release a new version of Perlanet.
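For reference, the build-and-publish half of that is pretty short. Here’s a rough sketch (not my exact workflow - the secret names and tag are illustrative) using the Docker login and build-push actions:
# A rough sketch: rebuild and push the Perlanet image whenever the
# Dockerfile changes (or when triggered manually).
name: Build Perlanet image
on:
  push:
    paths:
      - Dockerfile
  workflow_dispatch:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - uses: docker/login-action@v2
        with:
          # Docker Hub credentials stored as repository secrets (placeholder names)
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - uses: docker/build-push-action@v4
        with:
          context: .
          push: true
          tags: davorg/perl-perlanet:latest
Automatically bumping the Dockerfile when a new version of Perlanet ships would be the other half of the job.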
But that’s a problem for another day. For now, I’m happy with the improvements I’ve made to Planet Perl, Perlanet and the Perlanet Docker infrastructure.
The post Improvements to Planet Perl and Perlanet appeared first on Perl Hacks.
Rather later than usual (again!) here is my review of the best ten gigs I saw in 2022. For the first time since 2019, I did actually see more than ten gigs in 2022 although my total of sixteen falls well short of my pre-pandemic years.
Here are my ten favourite gigs of the year. As always, they’re in chronological order.
Not everything could make the top ten though. I think this was the first year that I saw Stealing Sheep and they didn’t make the list (their stage shows just get weirder and weirder and the Moth Club wasn’t a great venue for it) and I was astonished to find myself slightly bored at the Nine Inch Nails show at Brixton Academy.
A few shows sit just outside of the top ten – St. Vincent at the Eventim Apollo, John Grant at the Shepherd’s Bush Empire and Damon Albarn at the Barbican spring to mind.
But, all in all, it was a good year for live music and I’m looking forward to seeing more than sixteen shows this year.
Did you see any great shows this year? Tell us about them in the comments.
The post 2022 in Gigs appeared first on Davblog.
Using artificial intelligence (AI) to generate blog posts can be bad for search engine optimization (SEO) for several reasons.
First and foremost, AI-generated content is often low quality and lacks the depth and substance that search engines look for when ranking content. Because AI algorithms are not capable of understanding the nuances and complexities of human language, the content they produce is often generic, repetitive, and lacks originality. This can make it difficult for search engines to understand the context and relevance of the content, which can negatively impact its ranking.
Additionally, AI-generated content is often not well-written or structured, which can make it difficult for readers to understand and engage with. This can lead to a high bounce rate (the percentage of visitors who leave a website after only viewing one page), which can also hurt the website’s ranking.
Furthermore, AI-generated content is often not aligned with the website’s overall content strategy and goals. Because AI algorithms are not capable of understanding the website’s target audience, brand voice, and core messaging, the content they produce may not be relevant or useful to the website’s visitors. This can lead to a poor user experience, which can also hurt the website’s ranking.
Another issue with AI-generated content is that it can be seen as spammy or low quality by both search engines and readers. Because AI-generated content is often produced in large quantities and lacks originality, it can be seen as an attempt to manipulate search engine rankings or trick readers into engaging with the website. This can lead to the website being penalized by search engines or losing the trust and loyalty of its visitors.
In conclusion, using AI to generate blog posts can be bad for SEO for several reasons. AI-generated content is often low quality, poorly written, and not aligned with the website’s content strategy. It can also be seen as spammy or low quality by both search engines and readers, which can hurt the website’s ranking and reputation. It is important for websites to prioritize creating high-quality, original, and relevant content to improve their SEO and provide a positive user experience.
[This post was generated using ChatGPT]
The post 5 Reasons Why Using AI to Generate Blog Posts Can Destroy Your SEO appeared first on Davblog.
I’ve been building Docker containers again. And I think you’ll find this one a little more useful than the Perlanet one I wrote about a couple of weeks ago.
Several years ago I got into Travis CI and set up lots of my GitHub repos so they automatically ran the tests each time I committed to the repo. Later on, I also worked out how to tie those test runs into Coveralls.io so I got pretty graphs of how my test coverage was looking. I gave a talk about what I had done.
But two things changed.
Firstly, Travis CI got too popular and, eventually, removed their free service. And, secondly, GitHub Actions was introduced. Over the last few years, I’ve set up many of my repos to use GitHub Actions for CI. But, basically because I’m lazy, I didn’t remove the Travis CI configuration from those repos.
But last week I decided the time was right to start work on that. And when I went to remove the .travis.yml I realised that something was missing from my GitHub Actions CI workflows — they were running the unit tests, but they weren’t reporting on test coverage. So it was time to fix that.
I needed to reimplement the logic that connected Travis CI to Coveralls.io in a GitHub workflow. That actually turned out to be pretty simple. There’s a CPAN module called Devel::Cover::Report::Coveralls which takes the output from Devel::Cover, converts it to the correct format and sends it to Coveralls.io. And, as a bonus, it has documentation showing how to implement that in a GitHub workflow.
So I hacked at my workflow definition file for one of my CPAN modules and within a few minutes I had it working.
Well, I say “a few minutes”, but it took over thirteen minutes to run. It turns out that Devel::Cover::Report::Coveralls is a pretty heavyweight module and needs to install a lot of other modules in order to do its work.
At this point, you can probably guess where this is going. And you’d be right.
I’ve created a Docker container that has Devel::Cover::Report::Coveralls already installed. And, obviously, it’s available for everyone to use from the Docker hub — davorg/perl-coveralls.
A couple of small adjustments to my GitHub workflow and the coverage job is now running on my new container — and takes 29 seconds instead of 13 minutes. So that’s a win.
The relevant section of my workflow file is here:
coverage:
  runs-on: ubuntu-latest
  container: davorg/perl-coveralls:latest
  name: Test coverage
  steps:
    - uses: actions/checkout@v3
    - name: Install modules
      run: cpanm -n --installdeps .
    - name: Coverage
      env:
        GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
      run: cover -test -report Coveralls
And it’s producing nice graphs on Coveralls.io like the one above.
Let me know if you find it useful.
Originally published at Perl Hacks.
It’s October. And that means that Hacktoberfest has started. If you can get four pull requests accepted on other people’s code repositories during October then you can win a t-shirt.
In many ways, I think it’s a great idea. It encourages people to get involved in open source software. But in other ways, it can be a bit of a pain in the arse. Some people go crazy for a free t-shirt and that means you’ll almost certainly get several pull requests that aren’t really of the quality you’d hope for.
I have a particular problem that probably isn’t very common. I’ve talked before about the “semi-static” sites I run on GitHub Pages. There’s some data in a GitHub Repo and every couple of hours the system wakes up and runs some code which generates a few HTML pages and commits those HTML pages into the repo’s “/docs” directory. And — hey presto! — there’s a new version of your web site.
A good example is Planet Perl. The data is a YAML file which mostly consists of a list of web feeds. Every couple of hours we run perlanet to pull in those web feeds and build a new version of the web site containing the latest articles about Perl.
Can you see what the problem is?
The problem is that the most obvious file in the repo is the “index.html” which is the web site. So when people find that repo and want to make a small change to the web site they’ll change that “index.html” file. But that file is generated. Every few hours, any changes to that file are overwritten as a new version is created. You actually want to change “index.tt”. But that uses Template Toolkit syntax, so it’s easy enough to see why people with no Perl knowledge might want to avoid editing that.
The README file for the project explains which files you might want to change in order to make different types of changes. But people don’t read that. Or, if they do read it, they ignore the bits that they don’t like.
So I get pull requests that I have to reject because they change the wrong files.
Last year I got enough of these problematic pull requests that I decided to automate a solution. And it’s this pretty simple GitHub Workflow. It runs whenever my repo receives a pull request and looks at the files that have been changed. If that list of files includes “docs/index.html” then the PR is automatically closed with a polite message explaining what they’ve done wrong.
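The shape of it is roughly this sketch (not necessarily identical to my actual workflow - it uses the gh CLI that’s preinstalled on GitHub’s runners, and the comment wording is just an example):
# A rough sketch: close any pull request that edits the generated
# docs/index.html, with a comment pointing at the real source file.
name: Check changed files
on: pull_request_target
permissions:
  pull-requests: write
jobs:
  check:
    runs-on: ubuntu-latest
    steps:
      - name: Close PRs that edit the generated page
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          PR: ${{ github.event.pull_request.number }}
        run: |
          if gh pr diff "$PR" --repo "$GITHUB_REPOSITORY" --name-only | grep -qx 'docs/index.html'; then
            gh pr comment "$PR" --repo "$GITHUB_REPOSITORY" \
              --body 'Thanks for the PR, but docs/index.html is a generated file. Please edit index.tt instead.'
            gh pr close "$PR" --repo "$GITHUB_REPOSITORY"
          fi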
This makes my life easier. It’s possible it might make your life easier too.
Originally published at Perl Hacks.
“Okay Google. Where is Antarctica?”
Children can now get answers to all their questions using smart speakers and digital voice assistants.
A few years ago, children would run to their parents or grandparents to answer their questions. But with the ascendancy of voice assistants to the mainstream in recent years, many children rely more on technology than on humans.
Is this a good idea?
How does it impact the children?
When children interact with people, it helps them be more thoughtful, creative, and imaginative.
When they use artificial intelligence instead, several issues come into the foreground. These include access to age-inappropriate content and increasing the possibility of being rude or unpleasant, affecting how they treat others.
As mentioned, technology has both pros and cons. There are benefits to children using these devices, including improving diction, communication, social skills, and gaining information without bothering their parents.
Many families find that smart speakers like Amazon Echo and Google Home are useful. They use them for several functions, ranging from answering questions to setting the thermostat. Research shows that up to nine out of ten children between the ages of four and eleven in the US are regularly using smart speakers — often without parental guidance and control. So, what is the best approach for a parent to take?
Children up to seven years old can find it challenging to differentiate between humans and devices, and this can lead to one of the biggest dangers. If the device fulfils requests that are made rudely, children may start behaving the same way towards other humans.
Most parents consider it essential that smart devices should encourage polite conversations as a part of nurturing good habits in children. The Campaign for a Commercial-Free Childhood, or CCFC, is a US coalition of concerned parents, healthcare professionals, and educators. Recently, the CCFC protested against the Amazon Echo Dot Kids Edition, stating that it may affect children’s wellbeing. Because of this, they asked parents to avoid buying the Amazon Echo.
However, in reality, these smart devices have improved a lot and focus on encouraging polite conversations with children. It is all about how parents use and present these devices to their children, as these factors can influence them a lot.
But in simple terms, parents wish these devices to encourage politeness in their children. At the same time, they want their kids to understand the difference between artificial intelligence and humans while using these technological innovations.
Many parents have seen their children behave rudely to smart speakers. Several parents have expressed their concerns through social media, blog posts and forums like Mumsnet. They fear these behaviours can impact their kids when they grow up.
A report published by Childwise reached the conclusion that children who behave rudely to smart devices might be aggressive when they grow up, especially when dealing with other humans. It is, therefore, preferable if children use polite words while interacting with both humans and smart devices.
With interventions and rising concerns addressed by parents and health professionals, some tech companies have brought changes to virtual assistants and smart speakers.
The parental control features available in Alexa focus on training kids to be more polite. Amazon brands it as Magic Word, where the focus is on bringing positive enforcement. However, there is no penalty if children don’t speak politely. Available on Amazon Echo, this tool has added features like setting bedtimes, switching off devices, and blocking songs with explicit lyrics.
When it comes to Google Home, it has brought in a new feature called Pretty Please. Here, Google will perform an action only when children say “please”. For instance, “Okay, Google. Please set the timer for 15 minutes.”
You can enable this feature through Google Family Link, where you can find the settings for Home and Assistant, and apply it to whichever devices you choose. And once you’ve used it and figured things out, setting it up again is straightforward.
These tools and their approaches are highly beneficial for kids and parents. As of now, these devices only offer basic features and limited replies. But with time, there could be technological changes that encourage children to have much more efficient and polite interactions.
It was thinking about issues like this which led me to write my first children’s book — George and the Smart Home. In the book, George is a young boy who has problems getting the smart speakers in his house to do what he wants until he learns to be polite to them.
It is available now, as a paperback and a Kindle book, from Amazon.
Buy it from: AU / BR / CA / DE / ES / FR / IN / IT / JP / MX / NL / UK / US
The post Should Children be Polite While Using Smart Speakers? appeared first on Davblog.
A little later than usual, here’s my review of the gigs I saw last year.
In 2020, I saw four gigs. In 2021, I almost doubled that to seven. Obviously, we spent a lot of the year with most music venues closed, so those few gigs I saw were all in the second half of the year. Usually, I’d list my top ten gigs. This year (as last year) I’ll be listing them all. So here they are in chronological order.
And that was 2021. What will happen in 2022? Well, I have tickets for a dozen or so shows but who knows how many of them I’ll actually see? I’ve already had emails postponing the Wolf Alice and Peter Hook shows I was going to see this month. I guess I’ll just have to wait and see how the rest of the year pans out.
The post 2021 in Gigs appeared first on Davblog.
Doctor Who has a new showrunner. But he’s actually an old showrunner. Is that a good idea?
Since the news broke yesterday, Doctor Who fan forums have been discussing nothing but the fact that Russell T Davies is returning as showrunner after Chris Chibnall’s regeneration special is broadcast next year. Most fans seem to be very excited by this prospect; I’m not so sure.
Before I start, I should point out that I’ve been a big fan of Russell T Davies since long before he brought Doctor Who back to our screens in 2005. I’ll always be grateful for the work he did to bring the show back and I believe that he’s responsible for some great moments in Doctor Who history.
But I’m not sure I want to see him back as the showrunner. Let me explain why I’m so out of step with most of the show’s fans.
Firstly, although I’m grateful to him for bringing the show back, he’s not my favourite showrunner. Obviously, any Doctor Who is better than no Doctor Who but there was a lot of stuff in Davies’ first run that I didn’t like. For example, he was the person who first introduced us to companions’ families, which brought a slight soap opera feel to some of the episodes. Also, I thought that he often wrote himself into a bit of a corner. This was most apparent in the end-of-season two-parters. There were many occasions when the first part set up a fantastic premise only to be let down by a finale that just couldn’t live up to the promise. The Stolen Earth was great; Journey’s End was terrible. Then there’s The End of Time. Again, it started off well but had verged well into the ridiculous by the end of the first part. And don’t get me started on the self-indulgent, mawkish nonsense that made up the last twenty minutes of that story — leading to the Tenth Doctor’s regeneration.
I admit, however, that my opinions on Davies’ writing are purely personal. And, because of the massive rise in popularity of the show during his tenure, many viewers see his approach as the gold standard for how the show should work. My other points are, I hope, less opinion-based.
Secondly, Doctor Who is a show that should always be moving forward. In the classic era of the show, previous Doctors and companions would reappear very rarely. When someone left the show, you knew the chances of seeing them again were very slim. When an executive producer left (we didn’t call them showrunners back then) you knew that the show would change in new and experimental ways. Sometimes the changes didn’t work; most of the time they did. Change is fundamental to the show. It’s how the show has kept going for (most of) sixty years.
The newer sections of the audience don’t seem to realise that. I constantly hear fans wanting things to go back to how things were. As soon as Rose was written out at the end of series two, there were calls for her to come back. And while series four has some pretty good stuff in it, I think that bringing Rose back was pandering to the fanbase in an unhealthy way. We now have a situation where fans expect every character who has been written out of the show to be brought back at their whim. There aren’t very many weeks that pass without me seeing someone in a Facebook group suggesting some convoluted way that David Tennant could be brought back to be the Doctor again.
The show must always move forward. It must always change. I believe that RTD knows that, so I hope that his second era in charge will be sufficiently different to his first. But I worry that fans will start asking for Tennant back as the Doctor with Billie Piper by his side. For some fans, that seems to be the only version of the show they will be happy with.
Finally, I worry about what RTD’s reappointment means for the future of the show. When Chibnall’s departure was announced, all of the news stories claimed that he and Whittaker had a “three and out agreement” between themselves and that he only ever planned to do three years running the show. That’s rather at odds with the talk of him having a five-year plan for the show when he was appointed to the role. I realise that he will have done five years in the post by the time he goes, but he will have made three seasons and a handful of specials — so I’m not sure that counts.
No, I think it’s clear that Chibnall has been hounded out of the role by that toxic sector of the fanbase that refuses to give his work on the show a decent chance. And, given that Moffat also put up with a lot of abuse from certain fans, I begin to wonder how easy it is to find someone to take over the job. Chibnall’s departure was announced at the end of July and the BBC would certainly have known about it for some time before that. But they have failed to find someone new and exciting to take over the job and I wonder if it has become a bit of a poison chalice. People want to do the job because, hey, it’s running Doctor Who! But, on the other hand, if you don’t please the fanbase (and no-one can please all of the fanbase) then you’ll be vilified online and hounded off social media. Add to that the fact that both Davies and Moffat cited insane working schedules as part of their reason for leaving and, suddenly, the job doesn’t look quite as tempting.
I have no inside information here at all, but I wonder if the reappointment of RTD was an act of desperation on the part of the BBC. We know that Chibnall is steering the show up to and including a BBC centenary special that will be broadcast in 2022. But the show’s 60th anniversary is the year after that and without a showrunner, you can’t cast a new Doctor and without a new Doctor in place pretty soon, the 60th-anniversary celebrations would seem to be in danger.
The news of the reappointment has all been very celebratory, of course, but I wonder if that’s the whole story. I wonder if the BBC’s approach to RTD was more like this:
“So, that show you resurrected back in 2005. Well, we can’t find anyone to take over as showrunner, and unless we get things moving pretty quickly we’re not going to have a 60th anniversary worth speaking of. Seriously, we’re thinking of just cancelling it… unless you can suggest something that we could do…”
This, of course, leaves RTD thinking that the only way to save his baby is to step in himself. Maybe he’s stepped in as a stop-gap until the BBC finds someone else to take over. The announcement says he’s signed on for the 60th special and following series. But that’s a bit vague (because the plural of “series” looks exactly the same as the singular!) so who knows how long he’ll hang around for. Time will tell, I guess.
But, if you’re one of those fans who think it’s big or clever to be unrelentingly negative about the showrunner on social media, please stop and consider whether you’re part of a problem that could end up with no-one wanting the job and the show being cancelled.
All-in-all, I wish that the BBC hadn’t done this. I would have far preferred to see the show moving forward. But if, as I suspect, the alternative was no new Doctor Who for the foreseeable future, then obviously this is a good plan. I’m keen to see what Davies has in store.
But first I’m really excited to see what Chibnall has in store for his final series and the subsequent specials. If series 13 improves on series 12 to the extent that series 12 improved on series 11, then it’s going to be great.
The post The Return of RTD appeared first on Davblog.