When I first wrote about my pointless personal side projects a few months ago, I used the software I had written to generate my own link site (like a LinkTree clone) as an example.
I’m happy to report that I’ve continued to work on this software. Recently, it passed another milestone—I released a version to CPAN. It’s called App::LinkSite[*]. If you’d like a Link Site of your own, there are a few ways you can achieve that.
In all cases, you’ll want to gather a few pieces of information first. I store mine in a GitHub repo[**].
Most importantly, you’ll need the list of links that you want to display on your site. These go in a file called “links.json”. There are two types of link.
There are also a few bits of header information you’ll want to add:
Put all of that information into “links.json” and put the images in a directory called “img”. Fuller documentation is in the README.
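The README is the authoritative reference for the file format but, just to give a flavour, here’s a hypothetical sketch of the kind of data that ends up in “links.json” – the key names below are my guesses, not the module’s documented schema:

```perl
use strict;
use warnings;
use JSON::PP;    # core module

# Hypothetical sketch only: these key names are guesses, not
# App::LinkSite's documented schema -- see the README for the real format.
my $site = {
  name  => 'Dave Cross',
  desc  => 'All my links in one place',
  image => 'img/avatar.png',    # images go in the "img" directory
  links => [
    { title => 'Perl Hacks', url => 'https://perlhacks.com/' },
    { title => 'GitHub',     url => 'https://github.com/davorg' },
  ],
};

open my $fh, '>', 'links.json' or die "Cannot write links.json: $!";
print $fh JSON::PP->new->pretty->canonical->encode($site);
```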
Now you get to decide how you’re going to build your site.
Installed CPAN module
You can install the module (App::LinkSite) using your favourite CPAN installation tool. Then you can just run the “linksite” command and your site will be written to the “docs” directory – which you can then deploy to the web in whatever way you prefer.
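Concretely, assuming cpanm as your CPAN client, that looks like this:

```
$ cpanm App::LinkSite
$ cd my-links-repo    # the directory containing links.json and img/
$ linksite            # writes the generated site to ./docs
```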
Docker image
I build a Docker image whenever I release a new version of the code. That image is released to the Docker Hub. So if you like Docker, you can just pull down the “davorg/links:latest” image and go from there.
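I haven’t shown the exact invocation in this post, but the general shape would be something like this (the mount point is an assumption – check the image’s documentation for the real options):

```
$ docker pull davorg/links:latest
# Assumption: the container picks up links.json and img/ from the
# mounted directory -- consult the image docs for the actual usage.
$ docker run --rm -v "$PWD":/app davorg/links:latest
```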
GitHub Actions and GitHub Pages
But this is my favourite approach. Let GitHub do all the heavy lifting for you. There’s a little bit of set-up you’ll need to do.
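I won’t repeat the full set-up here (it’s in the README), but a sketch of the general shape – using GitHub’s standard Pages actions, with the build step being my assumption about how the pieces fit together – might look like this:

```yaml
# A sketch, not the project's actual workflow -- see the README
name: Build and deploy link site
on:
  push:
    branches: [main]
  workflow_dispatch:   # enables the manual "run this workflow" option

permissions:
  contents: read
  pages: write
  id-token: write

jobs:
  build:
    runs-on: ubuntu-latest
    container: davorg/links:latest   # assumption: the image provides linksite
    steps:
      - uses: actions/checkout@v4
      - run: linksite                # writes the site to ./docs
      - uses: actions/upload-pages-artifact@v3
        with:
          path: docs
  deploy:
    needs: build
    runs-on: ubuntu-latest
    environment: github-pages
    steps:
      - uses: actions/deploy-pages@v4
```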
Now, whenever you change anything in your repo, your site will be rebuilt and redeployed automatically. There’s also a “run this workflow” button under the “Actions” tab of your repo that allows you to trigger the build and deployment manually whenever you want.
This is the mechanism I like best – as it’s the least amount of work!
If you try this, please let me know as I’d like to add an “Examples” section to the README file. Also, if you try it and have problems getting it working, then let me know too. It works for me, but I’m sure I’ve forgotten to cater for some specific complexity of how other people would like to use my software. I’m always happy to get suggestions on how to improve things – even if it’s just better documentation.
[*] My continued use of the new Perl class syntax still seems to be causing problems with the CPAN infrastructure. The distribution isn’t being indexed properly.
[**] This shouldn’t be too much of a surprise – I store pretty much everything in a GitHub repo.
The post A link site of your very own appeared first on Perl Hacks.
Last weekend, we had a very successful (and very enjoyable) London Perl Workshop. After a five-year break, it was great to see so many old faces again. But in addition to people who had been regular attendees at recent workshops, two other groups of people were there in large numbers—people who had moved away from the Perl community (and who were coming back for the nostalgia) and new Perl users who hadn’t been to any Perl conference before. In both cases, it seems that one marketing move was particularly effective at telling both of these groups about the workshop.
It was a small text advert that ran on MetaCPAN.
I had nothing to do with the organisation of the workshop, so I have no idea who had the idea of running that ad, but it was (like so many great ideas) obvious in retrospect. It’s great to publish blog posts about upcoming events and mention events in the Perl Weekly newsletter. But marketing like that is mostly going to be read by people who are already part of the Perl community. And they (hopefully) already know about the workshop.
Whereas, sites like MetaCPAN are visited by Perl programmers who don’t consider themselves part of the community. People who don’t attend Perl Mongers meetings. People who don’t read blogs.perl.org. People who are (to use terminology that has been used to explain this problem for about twenty years) outside the echo chamber.
Advertising Perl community events to as large an audience as possible is a really good idea, and I think we should do more of it. But it has its downsides. Someone has to do some work to create a pull request to add the advert (and another one to remove it once the event is over). That’s not hard, but it requires thought and planning. I started to wonder if we could simplify this process and, in doing so, encourage more people to run ads like these on sites where more people might see them.
After an hour or so, I had a prototype of the Perl Ad Server – which I have subsequently cleaned up and improved.
It’s a simple enough concept. You add a tiny fragment of Javascript to your website. And that then automatically adds a small banner ad to the top of your site. We can control the ads that are being promoted by simply editing the JSON that we serve to the client sites.
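I haven’t reproduced the real embed code here – the project repo has the actual snippet – but the general shape is a single script tag along these lines (the URL is a placeholder, not the real one):

```html
<!-- Placeholder URL: copy the real snippet from the Perl Ad Server repo -->
<script src="https://example.com/perl-ads.js" async></script>
```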
It’s experimental. So I’d like to get as many people as possible to try it out.
It comes with a tiny caveat. I’m neither a web designer nor a Javascript expert. So it may interact with some web frameworks in weird ways (I added it to CPAN Dashboard and the ad appeared under the navbar – which isn’t supposed to happen). If it doesn’t work with your site for some reason, please remove the Javascript and raise an issue so I can investigate.
And if you’d like your event added to the current list of ads, let me know too.
The post Advertising Perl appeared first on Perl Hacks.
After a break of five years, the London Perl Workshop returns next weekend. It’s been twenty years since the first one. This year’s event is at a new venue called The Trampery, which is very close to Old Street tube station. If you’re a veteran of the early-2000s “Silicon Roundabout” excitement, you’ll know the place as it’s right across the road from The Foundry – the place to be seen if you were working in that area. I seem to remember taking people there as part of the entertainment programme for the first YAPC::Europe back in 2000. The pub isn’t there any more – it’s been knocked down and replaced by a modern hotel.
So I thought I’d have a look at the schedule and point out some of the interesting talks that I’m looking forward to seeing next Saturday.
Following Julian’s introduction, there’s an early indication of the quality of the talks – as there’s a massive clash. I understand how most people would want to see Paul Evans talking about Perl in 2030, but at the same time I’ll be talking about PerlDiver in the other main room. Now, obviously, I don’t want to sway anyone’s decision about which talk to choose – but I will have swag to give away. I don’t have any choice over which room I’ll be in, but I really want to see Paul’s talk, so I’m hoping it’ll be videoed.
I should point out that alongside the two main tracks, there’s a third room that the organisers hope will be used for hackathons, BOFs and things like that.
Next up we have Dave Lambley with Cloudy Perl, how it looks now and Richard Hainsworth with Using new RakuDoc v2. AWS Lambdas and Raku are both things that I’ve never really wrapped my head around. But I suspect that AWS will be more useful to me in the long run, so I’ll be with Dave.
The next choice is between Stuart Mackintosh’s TPRF Presentation and discussion and Ralf Langsdorf who sounds like he’s battling against Little Bobby Tables. I don’t do much community leadership stuff these days, but I’m glad there are still people who are willing to take on those roles – so I’ll be with Stuart. But Ralf’s is another talk that I’ll be looking forward to catching up with later.
In the final slot of the morning, the breakout room is hosting a talk – giving you a choice of three: James Green with I Adopted a Dist, and This is What Happened, José Joaquín Atria with Using OpenTelemetry in your Perl libraries and applications and Steve Roe with Raku HTML::Functional. I love a good CPAN module story, so I’ll be with James.
There’s then an hour for lunch. I’ll be trying one of the many local sandwich shops or pubs.
First up in the afternoon is a choice between Salve J. Nilsen – Metadata, CPAN, FOSS Supply Chains, and EU’s Cyber Resilience Act, Leon Timmermans – A modern introduction to XS and Andrew O’Neil – Chemometrics with Perl & Pharmaceutical Applications. Salve is covering a topic that, while seemingly dull, is something that more people need to take notice of. Leon is covering another topic that I’ve never quite understood. And Andrew is doing science – which is always fun. I think I’ll be listening to Leon in the hope I can finally understand XS.
Next up is Mohammad Anwar explaining What’s new in Perl v5.40?, Paul Cochrane on Fixing a fifteen-year-old curve fit bug or Salve is following up on his talk with a CPAN security BOF in the breakout room. Mohammad is always worth watching, but I very much enjoyed reading Paul’s blog post on this bug recently – so I haven’t decided yet where I’ll be.
Then we have a choice between Max Maischein – Managing recent files from Perl (which sounds useful), Nana Insaidoo – opensource vulnerabilities and where to find them (which sounds vital!) and the continuation of Salve’s BOF.
The last slot in this session is Mike Whitaker – Names are hard (which is inarguable), Yujia Zhai – Perl/Raku Based Coursework Design for Engineering Education or even more of Salve’s BOF.
Then we have a twenty-minute break. Phew. I think we’ll all need it after all that top-notch information.
Things get a bit disjointed after the break. There’s Andrew Solomon with Perl Talent Management: How Logicly attracts, develops, and retains their Perl developers. But that overlaps with two talks in the other main room – Ian Boddison with Perl, AI and Your System followed by Saif Ahmed with Bit Vector Arrays to Pixels in a Terminal with One Subroutine. And while all that’s going on, the breakout room has a session of Science Perl Talks chaired by Andrew O’Neil. They all sound interesting, but my interest in training trumps my other interests, so I’ll be listening to Andrew Solomon.
And then there are the lightning talks. As always, there will be a wide range of talks. I’ll be giving a brief update on Perl School. If you’ve been watching my social media, you might have an idea what I’ll be announcing.
Then we have the intriguingly named Thanks & The Future of LPW before we all head off to the pub for a debrief.
There are currently about 140 people signed up for the workshop. I don’t know what the venue capacity is, but I’m sure they can squeeze a few more in if you’d like to register. Oh, and if you are registered, please mark the talks you’re interested in on the schedule – it makes it easier for the organisers to decide which talks should be in which rooms.
Of course, if you’re not within easy travelling distance of London, then don’t forget that there are other Perl events in other parts of the world. In particular, there will be a TPRC in Greenville, South Carolina next year. See https://tprc.us/ for details.
To close, I’d like to thank the sponsors of this year’s LPW. Without them, the workshop wouldn’t be able to go ahead. Please show them some love.
The post London Perl Workshop 2024 – Preview appeared first on Perl Hacks.
Over the last few months, I’ve been dabbling in using AI to generate or improve code. I have a subscription to GitHub Copilot and I’m finding it a really useful tool for increasing my productivity. Copilot comes in several different flavours, and I’ve been making particular use of a couple of them.
Those two tools alone make me a more efficient programmer. And they’re well worth the $10 a month I pay for my Copilot subscription. But recently I was invited to the preview of Copilot Workspace. And that’s a whole new level. Copilot Workspace takes a GitHub issue as its input and returns a complete, multifile pull request that implements the required change. I’ve been playing with it for small tweaks, but I decided the time was right to do something more substantial. I planned to write an entire Dancer app by defining issues and asking Copilot to implement the code. Here’s what happened. You can follow along at the GitHub repo.
I decided I would start from the standard, automatically generated Dancer2 app. So I ran dancer2 gen -a Example and committed the output from that. It was then time for the first issue. I decided to start by adding (empty) routes for user registration and login. I opened the issue in the Copilot Workspace and asked the AI for some suggested code. It didn’t really understand the idea of empty routes – but the pull request seemed pretty good. I merged the PR and moved on to the next issue – to add basic registration and login screens. Again, the pull request did a little more than I asked for – adding a bit more registration and login logic – but the code was good.
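For anyone unfamiliar with Dancer2, “empty routes” just means handlers that are wired up but don’t do anything useful yet – something like this minimal sketch (not necessarily the exact code Copilot produced):

```perl
package Example;
use Dancer2;

# Placeholder routes: render the forms, but don't process anything yet
get '/register' => sub { template 'register' };
get '/login'    => sub { template 'login' };

# Form submissions just bounce back to the home page for now
post '/register' => sub { redirect '/' };
post '/login'    => sub { redirect '/' };

true;
```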
As an aside, you’ll notice that the PRs are all correctly linked to the correct issues and contain substantial information about the changes. This is all generated by the AI.
For the next step, we needed a database table to store the users. I asked Copilot to use SQLite and it gave me what I wanted – once again, going above and beyond. For the first time, its overenthusiasm was slightly annoying, because it added some database code to store new users and I hadn’t told it that we would be using DBIx::Class. So that was the next issue and the next pull request. Note that the pull request even includes adding DBIx::Class to the requisites in Makefile.PL.
Time for some unit tests (ok, maybe the best time was a few PRs ago!). The issue description was simple – “Write unit tests for everything we have so far”. Maybe it was too simple – as this was the first time the AI seemed to struggle a bit. I was merging the PRs without really checking them and the PR introduced a lot of useful tests – but many of them failed. Part of the problem here is that (as far as I can see) Copilot Workspace has no way to run the code it produces – so it was guessing how well it was doing. It took a few iterations to get that right – it basically boiled down to the database schema not being loaded into the database before the tests were run. At times while we were working through these problems, I was reminded of someone (I think it was Simon Willison) describing an AI programming assistant as “an overconfident, overenthusiastic intern”. Luckily, unlike an intern, Copilot never gets annoyed with you telling it to try again and providing more and more information to help it get to the bottom of a problem.
After a while, we had a working test suite and were back on track.
So we were back at adding features to the application. I decided the next thing we needed was to display the logged-in user’s username and email address on the main page. That seemed simple enough and worked first time. About this time I was getting annoyed with the standard Dancer2 web page, so we removed most of that. Then I switched from Dancer’s default “simple” templating system to the Template Toolkit [issue / PR].
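Switching Dancer2’s template engine is a configuration change rather than a code change – in config.yml it looks something like this (the tag options are just an example):

```yaml
# config.yml -- use Template Toolkit instead of the default "simple" engine
template: template_toolkit
engines:
  template:
    template_toolkit:
      start_tag: '<%'
      end_tag: '%>'
```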
While we were tidying up the look and feel, we added login and logout buttons [issue / PR] and a register button on the logged out page [issue / PR]. This led to some more confusion for a while as logging out didn’t work. It turned out the AI had used outdated code to destroy the session and I had to get very specific before it would do the right thing [issue / PR].
We then added some more tests [issue / PR], displayed registration and login errors [issue / PR] and ensured we were storing the passwords in encrypted form (to be honest, I’m slightly disappointed that the AI didn’t do that by default) [issue / PR].
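On the password point: what you actually want is one-way hashing rather than encryption. I haven’t shown the code Copilot generated, but a typical CPAN approach uses something like Crypt::Passphrase:

```perl
use strict;
use warnings;
use Crypt::Passphrase;

# A typical approach -- not necessarily the code Copilot generated.
# The Argon2 encoder needs Crypt::Passphrase::Argon2 installed.
my $authenticator = Crypt::Passphrase->new(encoder => 'Argon2');

my $password = 'correct horse battery staple';

# At registration time: store the hash, never the plain password
my $hash = $authenticator->hash_password($password);

# At login time: check the supplied password against the stored hash
print "Password OK\n"
    if $authenticator->verify_password($password, $hash);
```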
At this point (and I don’t know why I didn’t do it sooner), we replaced the UI with something using Bootstrap [issue / PR]. That led to a bit more tweaking of the buttons [issue / PR].
At this point, I had basically got to where I wanted to be. I had an app that didn’t do anything useful, but let you register, log in and log out. And I’d done it all pretty quickly and without writing very much code.
Then I decided to push it too far.
The thing that I actually wanted to achieve at this point was to add social registration and login to the site. I created an issue – Allow users to register and login using a Google account – and Copilot gave me some code. But at this point, it’s not just about code. You also need to configure stuff at Google in order to get this working. And, while Copilot gave me some information about what I needed to do, I haven’t yet been able to get it working. This is a good example of the limitations of AI-powered programming. It’s great at generating code, but (so far, at least) not so good at keeping up to date with how to interface with external systems. Oh, and there’s the problem we saw earlier about it not actually running the tests.
So, how do I think the experiment went? I was impressed. There was a lot of code generated that was as good or better than I would have written myself. There are certainly the problems that I mentioned above, but this stuff is improving at such an incredible rate that I really can’t see those problems still existing in a year.
I’ve started using Copilot Workspace for a lot more of my projects. And I’m happy with the results I’ve got.
What about you? Have you used any version of Copilot to help with your coding? How successful has it been?
The post Dancing with Copilot Workspace appeared first on Perl Hacks.
We need programmers who like to play on the bleading edge. By trying out new features, they are able to report on problems that they find – and, in doing so, improve the experience for the many people who follow them.
I’m not usually much of a bleading edge programmer. But I’ve been enjoying Perl’s new object-oriented programming features, so I’ve been using them a lot. And, in the process, I’ve found a few issues that I’ve reported (or, in a couple of cases, will report) to the relevant people.
Often, the problems that the bleading edgers come across are problems with the feature itself. That’s not the case with me. I’ve been finding problems with how Perl’s infrastructure deals with the new feature.
And please note, it would be easy to interpret this blog post as me complaining about these tools being “broken” because they aren’t keeping up with the development of the language. That’s not the case at all. I realise that these infrastructure projects are all run by volunteers and I’m grateful for all that these people do – working for free, keeping these systems (systems that we often tend to take for granted) running. In cases where I think I would be at all useful, I have, of course, offered my help in implementing these fixes.
So what are the problems?
The first CPAN module I wrote that used the new class syntax was Amazon::Sites. As soon as I uploaded it, I knew something was awry. I got an email from the PAUSE indexer saying that it couldn’t understand my distribution tarball. I wasn’t sure what the problem was, but within an hour I got a follow-up email from Neil Bowers pointing out that PAUSE couldn’t find a package statement in my module. That’s not surprising, as the new class syntax uses class as a replacement for package. And PAUSE hadn’t been updated to recognise that syntax. Before emailing me, Neil had taken the time to raise an issue in the PAUSE repo and he suggested that the upcoming Perl Toolchain Summit would be a good opportunity to fix the problem. He also suggested that adding a (strictly speaking, spurious) package line to the code would be a good workaround. I did that and uploaded a new version – which worked fine. And PAUSE was updated at the PTS. In the intervening time, I released a couple more modules that used the new syntax – so they also have the extra package line.
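The workaround looks slightly odd but is harmless. A minimal sketch of the shape (using Feature::Compat::Class, which I’ll come back to below – this isn’t the actual Amazon::Sites source):

```perl
use Feature::Compat::Class;

# Strictly spurious, but it gives the PAUSE indexer (and, as it turned
# out, MetaCPAN) a package statement that it recognises
package Amazon::Sites;

class Amazon::Sites {
    field $ua;
    method ua { $ua }
}
```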
The next problem is one that probably only affects me. Back in January, I wrote about some reusable GitHub Actions that I had developed for Perl code. Although it’s not mentioned in the blog post, I had added an action that uses Perl::Metrics::Simple to report on the complexity of my Perl code. I noticed that it was showing strange results for my modules that used the new syntax. Specifically, it wasn’t correctly reporting the complexity of code in methods. The reason is obvious, when you think about it. It’s just that Perl::Metrics::Simple doesn’t recognise the method keyword that is used in place of sub in the new OO syntax. I raised an issue in the repo for the module – optimistically promising a pull request in a few days. That didn’t happen as the problem is actually in PPI – which Perl::Metrics::Simple uses to parse the code. And there’s already a ticket to add all of the new keywords to PPI. Sadly, I don’t think my Perl is up to taking on this fix for the PPI team.
Given that the PAUSE issue I mentioned before has now been fixed, when I came to release App::LastStats recently I didn’t add the extra package line that had become my habit. It turns out that was a mistake. While my new module sailed past PAUSE, it seems that the lack of a package definition confuses MetaCPAN too. While my new module was being indexed by PAUSE and ending up in the 02Packages file correctly (so it was installable using tools like cpanm), it wasn’t appearing in MetaCPAN search or on my author page. Chatting with Olaf Alders on the #metacpan IRC channel, he spotted that the status of the release wasn’t being set to “latest” by the MetaCPAN ingestion code. Adding the same package line to the code soon fixed that problem too. Hopefully I’ll be able to work out where to fix the MetaCPAN code so it recognises class as a synonym for package. But, until that happens, anyone uploading a module to CPAN that uses the new syntax (is that really only me?) will need to add the package line.
There’s one more class of problem that I’m still trying to work out. And that’s down to my use of Feature::Compat::Class to make these modules compatible with versions of Perl that don’t support the new syntax. Part of the problem here is that we now have two versions of Perl that support the new syntax – 5.38 and 5.40. But they support slightly different versions of the syntax – that’s to be expected, of course; it’s how the new feature is being written.
The way that Feature::Compat::Class works is that it checks the version of Perl and if it is running on a version less than 5.38, then it loads another module called Object::Pad – which is a test bed for the new class syntax. Object::Pad supports more of the planned new syntax than has actually been released yet. So when Feature::Compat::Class loads Object::Pad, it uses a flag which tells Object::Pad to only allow the syntax that has been released in a Perl release. But which syntax? From which release? I guess it depends on which version of Object::Pad I’m using. Presumably, a version that was released after Perl 5.40 will support all of 5.40’s new syntax. And if I write code that uses the newest syntax, what happens when someone tries to run it on Perl 5.38? Currently, I’m only using 5.38’s syntax, so I’m not sure yet. And this is a problem that will get worse as future versions of Perl add more features to the class syntax.
I don’t think my new modules have many users – they’re very niche, so this is probably only a problem that I need to solve for myself. And I’m solving it by running the code in Docker containers that have the latest version of Perl installed. But it’s something I’ll need to think about more deeply if any of these modules become more widely used. Maybe I just encourage people to use them via the Docker images.
Oh, one final thing. The new class syntax is experimental. Some people would, I suppose, say that’s a good reason not to use it in a CPAN module – but, hey, bleading edge :-) But that means it produces loads of “experimental” warnings if you don’t explicitly add code to suppress them. That code is no warnings 'experimental::class'. But that doesn’t compile on a Perl earlier than 5.38 (because it’s not a recognised warning category on a version of Perl where the feature is unimplemented). So I need to look at using the if pragma to only turn off those warnings on the correct versions of Perl.
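The if pragma handles that conditional suppression neatly – something like:

```perl
# Only disable the warning on Perls where the category exists
no if $] >= 5.038, warnings => 'experimental::class';
```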
I don’t want to put anyone off using the new class syntax. I think it’s a great new tool and I’m looking forward to seeing it become more powerful as each new version of Perl is released. I just want people to realise that you will hit certain speedbumps by being an early adopter of features like this.
Have you tried the new syntax? What do you think of it?
The post On the [b]leading edge appeared first on Perl Hacks.
Royal titles in the United Kingdom carry a rich tapestry of history, embodying centuries of tradition while adapting to the changing landscape of the modern world. This article delves into the structure of these titles, focusing on significant changes made during the 20th and 21st centuries, and how these rules affect current royals.
The framework for today’s royal titles was significantly shaped by the Letters Patent issued by King George V in 1917. This document was pivotal in redefining who in the royal family would be styled with “His or Her Royal Highness” (HRH) and as a prince or princess. Specifically, the 1917 Letters Patent restricted these styles to:
This move was partly in response to the anti-German sentiment of World War I, aiming to streamline the monarchy and solidify its British identity by reducing the number of royals with German titles.
Notice that the definitions talk about “a sovereign”, not “the sovereign”. This means that when the sovereign changes, no-one will lose their royal title (for example, Prince Andrew is still the son of a sovereign, even though he is no longer the son of the sovereign). However, people can gain royal titles when the sovereign changes — we will see examples below.
Understanding the implications of the existing rules as his family grew, King George VI issued a new Letters Patent in 1948 to extend the style of HRH and prince/princess to the children of the future queen, Princess Elizabeth (later Queen Elizabeth II). This was crucial as, without this adjustment, Princess Elizabeth’s children would not automatically have become princes or princesses because they were not male-line grandchildren of the monarch. This ensured that Charles and Anne were born with princely status, despite being the female-line grandchildren of a monarch.
Queen Elizabeth II’s update to the royal titles in 2012 before the birth of Prince William’s children was another significant modification. The Letters Patent of 2012 decreed that all the children of the eldest son of the Prince of Wales would hold the title of HRH and be styled as prince or princess, not just the eldest son. This move was in anticipation of changes brought about by the Succession to the Crown Act of 2013, which ended the system of male primogeniture, ensuring that the firstborn child of the Prince of Wales, regardless of gender, would be the direct heir to the throne. Without this change, there could have been a situation where Prince William’s first child (and the heir to the throne) was a daughter who wasn’t a princess, whereas her eldest (but younger) brother would have been a prince.
As the royal family branches out, descendants become too distanced from the throne, removing their entitlement to HRH and princely status. For example, the Duke of Gloucester, Duke of Kent, Prince Michael of Kent and Princess Alexandra all have princely status as male-line grandchildren of George V. Their children are all great-grandchildren of a monarch and, therefore, do not have royal styles or titles. This reflects a natural trimming of the royal family tree, focusing the monarchy’s public role on those directly in line for succession.
The evolution of British royal titles reflects both adherence to deep-rooted traditions and responsiveness to modern expectations. These titles not only delineate the structure and hierarchy within the royal family but also adapt to changes in societal norms and the legal landscape, ensuring the British monarchy remains both respected and relevant in the contemporary era.
Originally published at https://blog.lineofsuccession.co.uk on April 25, 2024.
Changing rooms are the same all over the galaxy and this one really played to the stereotype. The lights flickered that little bit more than you’d want them to, a sizeable proportion of the lockers wouldn’t lock and the whole room needed a good clean. It didn’t fit with the eye-watering amount of money we had all paid for the tour.
There were a dozen or so of us changing from our normal clothes into outfits that had been supplied by the tour company — outfits that were supposed to render us invisible when we reached our destination. Not invisible in the “bending light rays around you” way, they would just make us look enough like the local inhabitants that no-one would give us a second glance.
Appropriate changing room etiquette was followed. Everyone was either looking at the floor or into their locker to avoid eye contact with anyone else. People talked in lowered voices to people they had come with. People who, like me, had come alone were silent. I picked up on some of the quiet conversations — they were about the unusual flora and fauna of our location and the unique event we were here to see.
Soon, we had all changed and were ushered into a briefing room where our guide told us many things we already knew. She had slides explaining the physics behind the phenomenon and was at great pains to emphasise the uniqueness of the event. No other planet in the galaxy had been found that met all of the conditions for what we were going to see. She went through the history of tourism to this planet — decades of uncontrolled visits followed by the licensing of a small number of carefully vetted companies like the one we were travelling with.
She then turned to more practical matters. She reiterated that our outfits would allow us to pass for locals, but that we should do all we could to avoid any interactions with the natives. She also reminded us that we should only look at the event through the equipment that we would be issued with on our way down to the planet.
Through a window in the briefing room a planet, our destination, hung in space. Beyond the planet, its star could also be seen.
An hour or so later, we were on the surface of the planet. We were deposited at the top of a grassy hill on the edge of a large crowd of the planet’s inhabitants. Most of us were of the same basic body shape as the quadruped locals and, at first glance at least, passed for them. A few of us were less lucky and had to stay in the vehicles to avoid suspicion.
The timing of the event was well understood and the company had dropped us off early enough that we were able to find a good viewing spot but late enough that we didn’t have long to wait. We had been milling around for half an hour or so when a palpable moment of excitement passed through the crowd and everyone looked to the sky.
Holding the equipment I had been given to my eyes I could see what everyone else had noticed. A small bite seemed to have been taken from the bottom left of the planet’s sun. As we watched, the bite got larger and larger as the planet’s satellite moved in front of the star. The satellite appeared to be a perfect circle, but at the last minute — just before it covered the star completely — it became obvious that the edge wasn’t smooth as gaps between irregularities on the surface (mountains, I suppose) allowed just a few points of light through.
And then the satellite covered the sun and the atmosphere changed completely. The world turned dark and all conversations stopped. All of the local animals went silent. It was magical.
My mind went back to the slides explaining the phenomenon. Obviously, the planet’s satellite and star weren’t the same size, but their distance from the planet exactly balanced their difference in size so they appeared the same size in the sky. And the complex interplay of orbits meant that on rare occasions like this, the satellite would completely and exactly cover the star.
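(As an aside, for anyone who wants to check the narrator’s claim: this is the same geometry as Earth’s real total solar eclipses, where a body’s apparent angular size is roughly its diameter divided by its distance. Using round real-world figures for the Sun and the Moon:)

$$
\theta \approx \frac{D}{d}, \qquad
\theta_{\text{Sun}} \approx \frac{1.39 \times 10^{6}\ \text{km}}{1.50 \times 10^{8}\ \text{km}} \approx 0.53^{\circ}, \qquad
\theta_{\text{Moon}} \approx \frac{3.47 \times 10^{3}\ \text{km}}{3.84 \times 10^{5}\ \text{km}} \approx 0.52^{\circ}
$$

(The two come out almost identical, which is why the satellite can cover the star exactly.)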
That was what we were there for. This was what was unique about this planet. No other planet in the galaxy had a star and a satellite that appeared exactly the same size in the sky. This is what made the planet the most popular tourist spot in the galaxy.
Ten minutes later, it was over. The satellite continued on its path and the star was gradually uncovered. Our guide bundled us into the transport and back up to our spaceship.
Before leaving the vicinity of the planet, our pilot found three locations in space where the satellite and the star lined up in the same way and created fake eclipses for those of us who had missed taking photos of the real one.
Originally published at https://blog.dave.org.uk on April 7, 2024.
The post The Tourist appeared first on Davblog.
I really thought that 2023 would be the year I got back into the swing of seeing gigs. But somehow I ended up seeing even fewer than I did in 2022: just 12 this year, compared with 16 the year before. Sometimes, I look at Martin’s monthly gig round-ups and wonder what I’m doing with my life!
I normally list my ten favourite gigs of the year, but it would be rude to miss just two gigs from the list, so here are all twelve gigs I saw this year — in, as always, chronological order.
So, what’s going to happen in 2024? I wonder if I’ll get back into the habit of going to more shows. I only have a ticket for one gig next year — They Might Be Giants playing Flood in November (a show that was postponed from this year). I guess we’ll see. Tune in this time next year to see what happened.
Originally published at https://blog.dave.org.uk on December 31, 2023.
The post 2023 in Gigs appeared first on Davblog.
Her Majesty has, of course, seen changes in many areas of society in the seventy years of her reign. But here, we’re most interested in the line of succession. So we thought it would be interesting to look at the line of succession on the day that she took the throne and see what had happened to the people who were at the top of the line of succession on that day. It’s a very different list to today’s.
I think that’s an interesting list for a few reasons:
So what do you think? Was the 1952 list a surprise to you? Did you expect it to be as different as it is from the current list?
Originally published at https://blog.lineofsuccession.co.uk on February 7, 2022.
Seventy Years of Change — Line of Succession Blog was originally published in Line of Succession on Medium.
Yesterday’s coronation showed Britain doing what Britain does best — putting on the most gloriously bonkers ceremony the world has seen…
Rather later than usual (again!) here is my review of the best ten gigs I saw in 2022. For the first time since 2019, I did actually see more than ten gigs in 2022 although my total of sixteen falls well short of my pre-pandemic years.
Here are my ten favourite gigs of the year. As always, they’re in chronological order.
Not everything could make the top ten, though. I think this was the first year that I saw Stealing Sheep and they didn’t make the list (their stage shows just get weirder and weirder, and the Moth Club wasn’t a great venue for it), and I was astonished to find myself slightly bored at the Nine Inch Nails show at Brixton Academy.
A few shows sit just outside of the top ten – St. Vincent at the Eventim Apollo, John Grant at the Shepherd’s Bush Empire and Damon Albarn at the Barbican spring to mind.
But, all in all, it was a good year for live music and I’m looking forward to seeing more than sixteen shows this year.
Did you see any great shows this year? Tell us about them in the comments.
The post 2022 in Gigs appeared first on Davblog.
Using artificial intelligence (AI) to generate blog posts can be bad for search engine optimization (SEO) for several reasons.
First and foremost, AI-generated content is often low quality and lacks the depth and substance that search engines look for when ranking content. Because AI algorithms are not capable of understanding the nuances and complexities of human language, the content they produce is often generic, repetitive, and lacks originality. This can make it difficult for search engines to understand the context and relevance of the content, which can negatively impact its ranking.
Additionally, AI-generated content is often not well-written or structured, which can make it difficult for readers to understand and engage with. This can lead to a high bounce rate (the percentage of visitors who leave a website after only viewing one page), which can also hurt the website’s ranking.
Furthermore, AI-generated content is often not aligned with the website’s overall content strategy and goals. Because AI algorithms are not capable of understanding the website’s target audience, brand voice, and core messaging, the content they produce may not be relevant or useful to the website’s visitors. This can lead to a poor user experience, which can also hurt the website’s ranking.
Another issue with AI-generated content is that it can be seen as spammy or low quality by both search engines and readers. Because AI-generated content is often produced in large quantities and lacks originality, it can be seen as an attempt to manipulate search engine rankings or trick readers into engaging with the website. This can lead to the website being penalized by search engines or losing the trust and loyalty of its visitors.
In conclusion, using AI to generate blog posts can be bad for SEO for several reasons. AI-generated content is often low quality, poorly written, and not aligned with the website’s content strategy. It can also be seen as spammy or low quality by both search engines and readers, which can hurt the website’s ranking and reputation. It is important for websites to prioritize creating high-quality, original, and relevant content to improve their SEO and provide a positive user experience.
[This post was generated using ChatGPT]
The post 5 Reasons Why Using AI to Generate Blog Posts Can Destroy Your SEO appeared first on Davblog.
“Okay Google. Where is Antarctica?”
Children can now get answers to all their questions using smart speakers and digital voice assistants.
A few years ago, children would run to their parents or grandparents to answer their questions. But with voice assistants having gone mainstream in recent years, many children now rely more on technology than on humans.
Is this a good idea?
How does it impact the children?
When children interact with people, it helps them be more thoughtful, creative, and imaginative.
When they use artificial intelligence instead, several issues come to the fore. These include access to age-inappropriate content and an increased tendency to be rude or unpleasant, which can affect how they treat others.
Like most technology, though, these devices have both pros and cons. The benefits for children include improving diction, communication, and social skills, and being able to find information without bothering their parents.
Many families find that smart speakers like Amazon Echo and Google Home are useful. They use them for several functions, ranging from answering questions to setting the thermostat. Research shows that up to nine out of ten children between the ages of four and eleven in the US are regularly using smart speakers — often without parental guidance and control. So, what is the best approach for a parent to take?
Children up to seven years old can find it challenging to differentiate between humans and devices, and this leads to one of the biggest dangers: if a device fulfils their requests however rudely they are made, children may start treating other humans the same way.
Most parents consider it essential that smart devices encourage polite conversation as part of nurturing good habits in children. The Campaign for a Commercial-Free Childhood (CCFC) is a US coalition of concerned parents, healthcare professionals, and educators. Recently, the CCFC protested against the Amazon Echo Dot Kids Edition, stating that it may affect children’s wellbeing, and asked parents to avoid buying it.
In reality, however, these smart devices have improved considerably and now focus on encouraging polite conversation with children. Much comes down to how parents use and present these devices to their children, as that strongly influences how children respond to them.
Put simply, parents want these devices to encourage politeness in their children, while also helping their kids understand the difference between artificial intelligence and humans as they use them.
Many parents have seen their children behave rudely to smart speakers. Several parents have expressed their concerns through social media, blog posts and forums like Mumsnet. They fear these behaviours can impact their kids when they grow up.
A report published by Child Wise concluded that children who behave rudely to smart devices might grow up to be aggressive, especially when dealing with other humans. It is, therefore, preferable for children to use polite words when interacting with both humans and smart devices.
With interventions and rising concerns addressed by parents and health professionals, some tech companies have brought changes to virtual assistants and smart speakers.
The parental control features available in Alexa focus on training kids to be more polite. Amazon brands this as Magic Word, and the focus is on positive reinforcement; however, there is no penalty if children don’t speak politely. Available on Amazon Echo, the tool also adds features like setting bedtimes, switching off devices, and blocking songs with explicit lyrics.
When it comes to Google Home, it has a new feature called Pretty Please. Here, Google will perform an action only when children say “please”. For instance, “Okay, Google. Please set the timer for 15 minutes.”
You can enable this feature through Google Family Link, where you’ll find the settings for Home and Assistant, and you can apply the new standards to whichever devices you prefer. Once you’ve been through the setup and figured things out, doing it again is straightforward.
These tools and their approaches are highly beneficial for kids and parents. As of now, these devices only offer basic features and limited replies. But with time, there could be technological changes that encourage children to have much more efficient and polite interactions.
It was thinking about issues like this that led me to write my first children’s book — George and the Smart Home. In the book, George is a young boy who has problems getting the smart speakers in his house to do what he wants until he learns to be polite to them.
It is available now, as a paperback and a Kindle book, from Amazon.
Buy it from: AU / BR / CA / DE / ES / FR / IN / IT / JP / MX / NL / UK / US
The post Should Children be Polite While Using Smart Speakers? appeared first on Davblog.