Changing rooms are the same all over the galaxy and this one really played to the stereotype. The lights flickered that little bit more than you’d want them to, a sizeable proportion of the lockers wouldn’t lock and the whole room needed a good clean. It didn’t fit with the eye-watering amount of money we had all paid for the tour.
There were a dozen or so of us changing from our normal clothes into outfits that had been supplied by the tour company — outfits that were supposed to render us invisible when we reached our destination. Not invisible in the “bending light rays around you” way, they would just make us look enough like the local inhabitants that no-one would give us a second glance.
Appropriate changing room etiquette was followed. Everyone was either looking at the floor or into their locker to avoid eye contact with anyone else. People talked in lowered voices to people they had come with. People who, like me, had come alone were silent. I picked up on some of the quiet conversations — they were about the unusual flora and fauna of our location and the unique event we were here to see.
Soon, we had all changed and were ushered into a briefing room where our guide told us many things we already knew. She had slides explaining the physics behind the phenomenon and was at great pains to emphasise the uniqueness of the event. No other planet in the galaxy had been found that met all of the conditions for what we were going to see. She went through the history of tourism to this planet — decades of uncontrolled visits followed by the licensing of a small number of carefully vetted companies like the one we were travelling with.
She then turned to more practical matters. She reiterated that our outfits would allow us to pass for locals, but that we should do all we could to avoid any interactions with the natives. She also reminded us that we should only look at the event through the equipment that we would be issued with on our way down to the planet.
Through a window in the briefing room a planet, our destination, hung in space. Beyond the planet, its star could also be seen.
An hour or so later, we were on the surface of the planet. We were deposited at the top of a grassy hill on the edge of a large crowd of the planet’s inhabitants. Most of us were of the same basic body shape as the quadruped locals and, at first glance at least, passed for them. A few of us were less lucky and had to stay in the vehicles to avoid suspicion.
The timing of the event was well understood and the company had dropped us off early enough that we were able to find a good viewing spot but late enough that we didn’t have long to wait. We had been milling around for half an hour or so when a palpable moment of excitement passed through the crowd and everyone looked to the sky.
Holding the equipment I had been given to my eyes I could see what everyone else had noticed. A small bite seemed to have been taken from the bottom left of the planet’s sun. As we watched, the bite got larger and larger as the planet’s satellite moved in front of the star. The satellite appeared to be a perfect circle, but at the last minute — just before it covered the star completely — it became obvious that the edge wasn’t smooth as gaps between irregularities on the surface (mountains, I suppose) allowed just a few points of light through.
And then the satellite covered the sun and the atmosphere changed completely. The world turned dark and all conversations stopped. All of the local animals went silent. It was magical.
My mind went back to the slides explaining the phenomenon. Obviously, the planet’s satellite and star weren’t the same size, but their distance from the planet exactly balanced their difference in size so they appeared the same size in the sky. And the complex interplay of orbits meant that on rare occasions like this, the satellite would completely and exactly cover the star.
That was what we were there for. This was what was unique about this planet. No other planet in the galaxy had a star and a satellite that appeared exactly the same size in the sky. This is what made the planet the most popular tourist spot in the galaxy.
Ten minutes later, it was over. The satellite continued on its path and the star was gradually uncovered. Our guide bundled us into the transport and back up to our spaceship.
Before leaving the vicinity of the planet, our pilot found three locations in space where the satellite and the star lined up in the same way and created fake eclipses for those of us who had missed taking photos of the real one.
Originally published at https://blog.dave.org.uk on April 7, 2024.
I gave my first public talk sometime between the 22nd and 24th September 2000. It was at the first YAPC::Europe which was held in London between those dates. I can’t be any more precise because the schedule is no longer online and memory fades.
I can, however, tell you that the talk was a disaster. I originally wasn’t planning to give a talk at all, but my first book was about to be published and the publishers thought that giving a talk about it to a room full of Perl programmers would be great marketing. I guess that makes sense. But what they didn’t take into account was the fact that I knew nothing about how to give an interesting talk. So I threw together a few bullet points taken from the contents of the book and wrote a simple Perl script to turn those bullet points into HTML slides (it was 2000 – that’s what everyone did). I gave absolutely no thought to what the audience might want to know or how I could tell a story to guide them through. It was a really dull talk. I’m sorry if you were in the audience. Oh, and add the fact that I was speaking after the natural raconteur, Charlie Stross, and you can probably see why I’m eternally grateful that the videos we took of the conference never saw the light of day. I left the stage knowing for sure that public speaking was not for me and vowed that I would never give another talk.
But…
We were experimenting with a session of lightning talks at the conference and I had already volunteered to give a talk about my silly module Symbol::Approx::Sub. I didn’t feel that I could back out and, anyway, it was only five minutes. How bad could it be?
As it turns out, with Symbol::Approx::Sub I had stumbled on something that was simultaneously both funny and useful (well, the techniques are useful – obviously the module itself isn’t). And I accidentally managed to tell the story of the module engagingly and entertainingly. People laughed. And they clapped enthusiastically at the end. I immediately changed my mind about never speaking in public again. This was amazing. This was as close as I was ever going to get to playing on stage at the Hammersmith Odeon. This was addictive.
But something had to change. I had to get better at it. I had to work out how to give entertaining and useful talks that were longer than five minutes long. So I studied the subject of public speaking. The Perl community already had two great public speakers in Mark Dominus and Damian Conway and I took every opportunity to watch them speak and work out what they were doing. It helped that they both ran courses on how to be a better public speaker. I also read books on the topic and when TED talks started coming online I watched the most popular ones obsessively to work out what people were doing to give such engaging talks (it turns out the answer really boils down to – taking out most of the content!)
And I practised. I don’t think there was a conference I went to between 2000 and 2020 where I didn’t give a talk. I’d never turn down an opportunity to speak at a Perl Mongers meeting. And, while I’m certainly not Damian Conway, I like to think I got better at it. I’d get pretty good scores whenever there was a feedback form.
All of which means that I’ve given dozens of talks over the last twenty-plus years. From lightning talks to all-day (actually, a couple of two-day) training sessions. I’ve tried to be organised about keeping copies of the slides from all of the talks I’ve given, but I fear a few decks have slipped through the cracks over the years. And, of course, there are plenty of videos of me giving various talks over that time.
I’ve been thinking for a while that it would be good to gather them all together on one site. And, a couple of weeks ago, I started prodding at the project. Today, it reached the stage where it’s (just barely) useable. It’s at talks.davecross.co.uk. Currently, it’s just a list of talk titles and it only covers the last five years or so (and for a lot of that time, there were no conferences or meetings to speak at). But having something out there will hopefully encourage me to expand it in two dimensions:
The second point is going to be fun. There will be some serious data archaeology going on. I think I can dig out details of all the YAPCs and LPWs I’ve spoken at – but can I really find details of every London Perl Mongers technical meeting? And there are some really obscure things in there – I’m pretty sure I spoke at a Belgian Perl Workshop once. And what was that Italian conference held in Ferrara just before the Mediterranean Perl Whirl? There’s a lot of digging around in the obscure corners of the web (and my hard disk!) in my near future.
Wish me luck.
The post Collecting talks first appeared on Perl Hacks.
I’ve spent more than a reasonable amount of time thinking about Amazon links over the last three or four years.
It started with the Perl School web site. Obviously, I knew that the book page needed a link to Amazon – so people could buy the books if they wanted to – but that’s complicated by the fact that Amazon has so many different sites and I have no way of knowing which site is local to anyone who visits my web site. I had the same problem when I built a web site for George and the Smart Home. And again when I created a site for Will Sowman’s books. At some point soon, I’ll also want to put book pages on the Clapham Tech Press web site – and that will have exactly the same problem.
That’s the user-visible side of the equation. There are other reasons for wanting to know about all of the existing Amazon sites. One of the best ones is that I want to track royalties from the various sites and apportion them to the right authors.
On the Perl School site, I solved the problem by creating a database table which contains data about the sites that I knew about at the time. Then there’s a DBIC result class and that result set is passed to the book page template, which builds “buy” buttons for each site found in the result set. That works, but it’s not very portable. When it came to the other sites, I found myself writing a “make_buttons” program which used the Perl School database table to generate some HTML which I then copied into the relevant template.
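In outline, that original approach looked something like this. It’s only a sketch, and the schema class, resultset and template names are invented for illustration:

use strict;
use warnings;
use Template;
use PerlSchool::Schema;   # hypothetical DBIC schema class

# Pull every Amazon site we know about out of the database...
my $schema = PerlSchool::Schema->connect('dbi:SQLite:perlschool.db');
my @sites  = $schema->resultset('AmazonSite')->all;

# ...and hand the resultset to the book page template, which builds a
# "buy" button for each site.
my $tt = Template->new;
$tt->process('book_page.tt', { sites => \@sites, asin => 'XXXXXXX' })
  or die $tt->error;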
But that never sat well with me. It made me uncomfortable that all of my book sites relied on a database table that existed in one of my repos that, really, has no connection to those other sites. I thought briefly about duplicating the table into the other repos, but that set off the “Don’t Repeat Yourself” alarm in my head, so I backed away from that idea pretty quickly.
It would be great if Amazon had an API for this information. But, unless I’m blind, it seems to be the only API that they don’t provide.
So, currently, what I’ve done is to encapsulate the data in a CPAN module. It’s called Amazon::Sites and I’ve been releasing slowly-improving versions of it over the last week or so – and it’s finally complete enough that I can use it to replace my database table. It might even make the code for my various book sites easier to maintain.
Maybe it will be useful to you too.
Here’s how you use it:
use Amazon::Sites;

my $sites = Amazon::Sites->new;

my @sites = $sites->sites;
my %sites = $sites->sites_hash;
my @codes = $sites->codes;

my $site = $sites->site('UK');
say $site->currency; # GBP
say $site->tldr; # co.uk
# etc

my %urls = $sites->asin_urls('XXXXXXX');
say $urls{UK}; # https://amazon.co.uk/dp/XXXXXXX
Once you’ve created an instance of the class, you have access to a few useful methods:
The Amazon::Site object has a number of useful attributes:
Amazon::Site also has an “asin_url()” method. You pass it an ASIN (that’s the unique identifier that Amazon uses for every product on its site) and it returns the full URL of that product on that site. There’s a similar “asin_urls()” (note the “s” at the end) on the Amazon::Sites object. That returns a hash of URLs for all of the sites the object knows about. The key is the country code and the value is the URL in that country.
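Here’s a quick sketch of the single-site method described above (the ASIN is just a placeholder):

use v5.26;
use Amazon::Sites;

my $sites = Amazon::Sites->new;
my $uk    = $sites->site('UK');

# Build a product URL for one specific site
say $uk->asin_url('XXXXXXX'); # e.g. https://amazon.co.uk/dp/XXXXXXX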
You can also filter the list of Amazon sites that you’re interested in when creating your Amazon::Sites object. The constructor takes optional “include” and “exclude” arguments. Each of them is a reference to an array of ISO country codes. For reasons that are, I hope, obvious, you can only use one of those options at a time.
If you’re an Amazon Associate, you can make money by including your “associate code” in Amazon URLs that you share with people. Amazon::Sites deals with that too. An Amazon associate code is associated with one Amazon site. So the constructor method has an optional “assoc_codes” argument which is a hash mapping country codes to associate codes. If you have set up associate codes in your Amazon::Sites object, then your associate code will be included in any URLs that are generated by the modules – as long as the URL is for one of the sites that you have an associate code for.
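As a rough sketch of those constructor options, put together from the description above (the associate codes here are invented):

use v5.26;
use Amazon::Sites;

# Only build links for three sites, and add (made-up) associate codes
# for the two sites where we have one.
my $sites = Amazon::Sites->new(
  include     => [ 'UK', 'US', 'DE' ],
  assoc_codes => {
    UK => 'example-21',
    US => 'example-20',
  },
);

my %urls = $sites->asin_urls('XXXXXXX');
say $urls{UK}; # URL includes the UK associate code
say $urls{DE}; # no associate code for DE, so a plain URL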
That’s all it does at the moment. It addresses most of my needs. There’s one more feature I might add soon. I’d like to have template processing built-in – so if I have a template and an Amazon::Sites object, I can easily process that template for every site that the object knows about.
So that’s the class. I hope someone out there finds it useful. If you think it’s almost useful, but there’s a feature missing then please let me know (or even send a pull request).
But there are a couple of other things I’d like to mention about how I wrote this class.
Firstly, this is written using the new perlclass OO syntax. Specifically, it uses Feature::Compat::Class, so you can use it on versions of Perl back to 5.26. It’s true that the new syntax doesn’t have all the features that you’d get with something like Moose, but I love using it – and over the next few versions of Perl, it will only get better and better. If you haven’t tried the new syntax yet, then I recommend you have a look at it.
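If you haven’t seen the new syntax before, here’s a minimal, self-contained sketch of what it looks like (an illustrative example, not code taken from Amazon::Sites):

use v5.26;
use Feature::Compat::Class;

class Counter {
  field $count :param = 0;

  method increment { $count++ }
  method count     { $count }
}

my $counter = Counter->new(count => 3);
$counter->increment;
say $counter->count; # 4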
Secondly, this is the first new CPAN distribution I’ve written since I’ve had my subscription to GitHub Copilot. And I’m really impressed at how much faster I was using Copilot. As I said, I was using experimental new Perl syntax, so I was impressed at how well Copilot understood what I was doing. I lost count of the number of times I typed the name of a new method and Copilot instantly wrote the code for me – and 95% of the time the code it wrote was spot on. AI programming support is here and it’s good. If you’re not using it yet, then you’re losing out.
I’m told a good blog post needs a “call to action”. This one has three:
The post Amazon Links and Buttons first appeared on Perl Hacks.
I can’t be the only programmer who does this. You’re looking for an online service to fill some need in your life. You look at three or four competing products and they all get close but none of them do everything you want. Or maybe they do tick all the boxes but they cost that little bit more than you’re comfortable paying. After spending a few hours on your search that little voice pops up in your head with that phrase that you really don’t want to hear:
Maybe you should just write your own version. How hard can it be?
A couple of hours later, you have something that vaguely works, you’ve learned more than you thought there was to learn about some obscure corner of life and you’re the proud owner of another new domain.
Please tell me it’s not just me.
So today I’ve been working on my Linktree clone.
Honestly, I can’t remember what it was about Linktree or its existing clones that I didn’t like. I suspect it’s that I just wanted more control over my links page than a hosted service would give me. All I can be sure of is that in September 2022 I made the first commit to a project that, eighteen months later, I’m still maintaining and improving.
To be fair to myself, I didn’t buy a new domain. That means I’m getting better, right? The output is hosted at links.davecross.co.uk. I’m not even paying for hosting as it’s all hosted on GitHub Pages – it’s a static site that has occasional changes, so it’s perfect for GitHub Pages.
But I have spent quite a lot of time working on the code. Probably more than is reasonable for a web site that gets a dozen visits in a good month. Work on it seems to come in waves. I’ll go for months without touching it, and then I’ll spend a week or so working on it pretty much every day. Over the last 24 hours or so, I’ve passed an important milestone. Like all of these little side projects, this one started out as a largely unstructured code dump – as I worked to get it doing something that approximated the original goal. Then I’ll spend some time (months, usually) where fixes and improvements are implemented by hacking on the original horrible code. At some point, I’ll realise that I’m making things too difficult for myself and I’ll rewrite it (largely from scratch) to be better structured and easier to maintain. That’s where I got to today. The original single-file code dump has been rewritten into something that’s far nicer to work on. And as a side benefit, I’ve rewritten it all using Perl’s new, built-in object orientation features – which I’m loving.
Oh, and I guess that’s the upside of having little side projects like this – I get to try out new features like the new OO stuff in a no-pressure environment. And just spending time doing more programming has to make you a better programmer, right? And surely it’s just a matter of time before one of these projects takes off and turns me into a millionaire! I’m not saying for a minute that having pointless side projects is a bad idea. I’m just wondering how many pointless side projects are too many.
So, that’s my guilty secret – I’m a serial writer of code that doesn’t really need to be written. What about you? How many pointless side projects do you have? And how much of your spare time do they use up?
The post Pointless personal side projects first appeared on Perl Hacks.
The future is already here – it’s just not very evenly distributed
– William Gibson
The quotation above was used by Tim O’Reilly a lot around the time that Web 2.0 got going. Over recent months, I’ve had a few experiences that have made it clear to me that even the present isn’t particularly evenly distributed either. It’s always easy to find people still using technologies that we would consider archaic (and not in a rustic or hipster way).
We’ve known for twenty years that CGI is a bad idea. It’s almost ten years since CGI.pm was removed from Perl core. Surely, all of us are using something modern for web development these days.
Well, apparently not. CGI is alive and well and living on the fringes of the Perl community. I’ve come across it being used in some quite surprising places over the last year or so. I’m going to obfuscate some details in the following descriptions to, hopefully, prevent you (or, worse, the people involved) from recognising the companies involved.
None of this should be taken as an argument that the nms project was wrong to use CGI.pm or that the Perl 5 Porters were wrong to remove it from the Perl standard library. I still support both decisions. I just found it a bit jarring to be reminded that while we’re all using PSGI or Mojolicious to write microservices in Perl that serve REST APIs that are developed and deployed in Docker containers, there are still people out there who are struggling to FTP code that was written in 1997 onto low-end shared hosting.
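For anyone who only knows the CGI way of doing things, the modern alternative is less daunting than it might sound. Here’s a minimal PSGI application as a sketch – save it as app.psgi and run it with plackup:

#!/usr/bin/env perl
use strict;
use warnings;

# A PSGI application is just a coderef that takes the request environment
# and returns a three-element response: status, headers and body.
my $app = sub {
  my ($env) = @_;

  return [
    200,
    [ 'Content-Type' => 'text/plain' ],
    [ "Hello from a PSGI app\n" ],
  ];
};

# The .psgi file returns the coderef for the server (e.g. plackup) to run.
$app;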
I think this state of affairs has two causes. Firstly (like the first client I mentioned above) some systems were set up when CGI was still in common use – and things haven’t changed since. These people get a sudden shock when they are forced to move to a more modern server for some reason. And then there are people like my Fiverr clients who install Perl CGI programs because that’s what they have always done and they don’t know that there is an alternative approach. Part of the problem there is, presumably, that Perl has meant badly-written CGI programs for a large proportion of the web’s existence, which means that anyone searching for information on this subject is likely to find pages and pages of advice telling them how to install CGI programs before they discover anything about PSGI or Docker. And I think there might be a solution to that problem (or, at least, a way to nudge the web in the right direction).
Over last weekend I was cataloguing subdomains (I know how to have fun!) and I found a web site that I had forgotten about. I had obviously been contemplating a very similar situation back in 2016.
The site is called Perl Web Advice. The intention was (is?) that it would be a definitive source of good advice about how to develop and deploy web applications written in Perl. I had only made tiny inroads into the task before something else apparently seemed more fun and the project was abandoned.
But there’s the start of a framework for the site. And, this week, I’ve given it a GitHub Actions workflow so it gets republished automatically whenever changes are pushed to the repo. I’ve even set up a Dockerfile to make it easy to use the static site generator that I’ve used for it. So perhaps the idea has merit. Once there’s a bit more useful content there I could see if I can remember any of my SEO knowledge and get it appearing in results where people are looking for advice on this topic.
I would, of course, be happy to consider contributions from other people. What do you think? Would you like to help me save people from the hell of CGI deployments?
The post The present isn’t evenly distributed either first appeared on Perl Hacks.
You might remember that I’ve been taking an interest in GitHub Actions for the last year or so (I even wrote a book on the subject). And at the Perl Conference in Toronto last summer I gave a talk called “GitHub Actions for Perl Development” (here are the slides and the video).
During that talk, I mentioned a project I was working on to produce a set of reusable workflows that would make it easier for anyone to start using GitHub Actions in their Perl development. Although (as I said in the talk) things were moving pretty quickly on the project at the time, once I got back to London, several other things became more important and work on this project pretty much stalled. But over the last couple of weeks, I’ve returned to this project and I’ve finally got some of the workflows into a state where I’ve been using them successfully in my GitHub repos and I think they’re now ready for you to start using them in yours. There are three workflows that I’d like you to try:
And using these workflows in your GitHub repos is as simple as creating a new file in the .github/workflows directory which contains something like this:
name: CI

on:
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  workflow_dispatch:

jobs:
  build:
    uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-test.yml@main
  coverage:
    uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-coverage.yml@main
  perlcritic:
    uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-perlcritic.yml@main
There are a couple of parameters you can use to change the behaviour of these workflows. In the Toronto talk, I introduced the idea of a matrix of tests, where you can test against three operating systems (Linux, MacOS and Windows) and a list of Perl versions. By default, the cpan-test workflow uses all three operating systems and all production versions of Perl from 5.24 to 5.38. But you can change that by using the perl_version and os parameters. For example, if you only wanted to test on Ubuntu, using the most recent two versions of Perl, you could use this:
build:
  uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-test.yml@main
  with:
    perl_version: "['5.36', '5.38']"
    os: "['ubuntu']"
Annoyingly, the parameters to a reusable workflow can only be a single scalar value. That’s why we have to use a JSON-encoded string representing an array of values. Maybe this will get better in the future.
The cpan-perlcritic workflow also has a parameter. You can use level to change the level that perlcritic runs at. The default is 5 (the gentlest level) but if you were feeling particularly masochistic, you could do this:
perlcritic:
  uses: PerlToolsTeam/github_workflows/.github/workflows/cpan-perlcritic.yml@main
  with:
    level: 1
The workflows are, of course, available on GitHub. It would be great to have some people trying them out and reporting back on their experiences. Raising issues and sending pull requests is very much encouraged.
Please let me know how you get on with them.
The post GitHub Actions for Perl Development first appeared on Perl Hacks.
I really thought that 2023 would be the year I got back into the swing of seeing gigs. But, somehow, I ended up seeing even fewer than I did in 2022 – just 12, compared with the 16 I saw the previous year. Sometimes, I look at Martin’s monthly gig round-ups and wonder what I’m doing with my life!
I normally list my ten favourite gigs of the year, but it would be rude to miss just two gigs from the list, so here are all twelve gigs I saw this year — in, as always, chronological order.
So, what’s going to happen in 2024? I wonder if I’ll get back into the habit of going to more shows. I only have a ticket for one gig next year — They Might Be Giants playing Flood in November (a show that was postponed from this year). I guess we’ll see. Tune in this time next year to see what happened.
Originally published at https://blog.dave.org.uk on December 31, 2023.
Her Majesty has, of course, seen changes in many areas of society in the seventy years of her reign. But here, we’re most interested in the line of succession. So we thought it would be interesting to look at the line of succession on the day that she took the throne and see what had happened to the people who were at the top of the line of succession on that day. It’s a very different list to today’s.
I think that’s an interesting list for a few reasons:
So what do you think? Was the 1952 list a surprise to you? Did you expect it to be as different as it is from the current list?
Originally published at https://blog.lineofsuccession.co.uk on February 7, 2022.
Seventy Years of Change was originally published in Line of Succession on Medium.
Yesterday’s coronation showed Britain doing what Britain does best — putting on the most gloriously bonkers ceremony the world has seen…
Rather later than usual (again!) here is my review of the best ten gigs I saw in 2022. For the first time since 2019, I did actually see more than ten gigs in 2022 although my total of sixteen falls well short of my pre-pandemic years.
Here are my ten favourite gigs of the year. As always, they’re in chronological order.
Not everything could make the top ten though. I think this was the first year that I saw Stealing Sheep and they didn’t make the list (their stage shows just get weirder and weirder and the Moth Club wasn’t a great venue for it) and I was astonished to find myself slightly bored at the Nine Inch Nails show at Brixton Academy.
A few shows sit just outside of the top ten – St. Vincent at the Eventim Apollo, John Grant at the Shepherd’s Bush Empire and Damon Albarn at the Barbican spring to mind.
But, all in all, it was a good year for live music and I’m looking forward to seeing more than sixteen shows this year.
Did you see any great shows this year? Tell us about them in the comments.
The post 2022 in Gigs appeared first on Davblog.
Using artificial intelligence (AI) to generate blog posts can be bad for search engine optimization (SEO) for several reasons.
First and foremost, AI-generated content is often low quality and lacks the depth and substance that search engines look for when ranking content. Because AI algorithms are not capable of understanding the nuances and complexities of human language, the content they produce is often generic, repetitive, and lacking originality. This can make it difficult for search engines to understand the context and relevance of the content, which can negatively impact its ranking.
Additionally, AI-generated content is often not well-written or structured, which can make it difficult for readers to understand and engage with. This can lead to a high bounce rate (the percentage of visitors who leave a website after only viewing one page), which can also hurt the website’s ranking.
Furthermore, AI-generated content is often not aligned with the website’s overall content strategy and goals. Because AI algorithms are not capable of understanding the website’s target audience, brand voice, and core messaging, the content they produce may not be relevant or useful to the website’s visitors. This can lead to a poor user experience, which can also hurt the website’s ranking.
Another issue with AI-generated content is that it can be seen as spammy or low quality by both search engines and readers. Because AI-generated content is often produced in large quantities and lacks originality, it can be seen as an attempt to manipulate search engine rankings or trick readers into engaging with the website. This can lead to the website being penalized by search engines or losing the trust and loyalty of its visitors.
In conclusion, using AI to generate blog posts can be bad for SEO for several reasons. AI-generated content is often low quality, poorly written, and not aligned with the website’s content strategy. It can also be seen as spammy or low quality by both search engines and readers, which can hurt the website’s ranking and reputation. It is important for websites to prioritize creating high-quality, original, and relevant content to improve their SEO and provide a positive user experience.
[This post was generated using ChatGPT]
Originally published at https://blog.dave.org.uk on December 11, 2022.
“Okay, Google. Where is Antarctica?”
Children can now get answers to all their questions using smart speakers and digital voice assistants.
A few years ago, children would run to their parents or grandparents with their questions. But with the ascendance of voice assistants into the mainstream in recent years, many children now rely more on technology than on the humans around them.
Is this a good idea?
How does it impact the children?
When children interact with people, it helps them be more thoughtful, creative, and imaginative.
When they use artificial intelligence instead, several issues come to the fore. These include access to age-inappropriate content and an increased likelihood of rude or unpleasant behaviour, which can affect how they treat others.
As with most technology, there are both pros and cons. There are benefits to children using these devices, including improved diction, communication and social skills, and the ability to find information without bothering their parents.
Many families find that smart speakers like Amazon Echo and Google Home are useful. They use them for several functions, ranging from answering questions to setting the thermostat. Research shows that up to nine out of ten children between the ages of four and eleven in the US are regularly using smart speakers — often without parental guidance and control. So, what is the best approach for a parent to take?
Children up to seven years old can find it challenging to differentiate between humans and devices, and this leads to one of the biggest dangers. If a device fulfils requests that are made rudely, children may come to behave the same way towards other humans.
Most parents consider it essential that smart devices encourage polite conversation as part of nurturing good habits in children. The Campaign for a Commercial-Free Childhood (CCFC) is a US coalition of concerned parents, healthcare professionals, and educators. The CCFC recently protested against the Amazon Echo Dot Kids Edition, stating that it may affect children’s wellbeing, and asked parents to avoid buying it.
However, in reality, these smart devices have improved a lot and now focus on encouraging polite conversations with children. A great deal depends on how parents use and present the devices to their children, as that can strongly influence how the children respond.
Put simply, parents want these devices to encourage politeness in their children, while also wanting their kids to understand the difference between artificial intelligence and humans as they use these technological innovations.
Many parents have seen their children behave rudely to smart speakers. Several parents have expressed their concerns through social media, blog posts and forums like Mumsnet. They fear these behaviours can impact their kids when they grow up.
A report published by ChildWise concluded that children who behave rudely to smart devices might become aggressive as they grow up, especially when dealing with other people. It is, therefore, preferable for children to use polite words when interacting with both humans and smart devices.
Following these interventions and the concerns raised by parents and health professionals, some tech companies have made changes to their virtual assistants and smart speakers.
The parental control features available in Alexa include one that focuses on training kids to be more polite. Amazon brands it as Magic Word, and it relies on positive reinforcement; there is no penalty if children don’t speak politely. Available on Amazon Echo devices, the parental controls also add features like setting bedtimes, switching off devices, and blocking songs with explicit lyrics.
Google Home, meanwhile, has introduced a similar feature called Pretty Please. With it enabled, Google will only perform an action when children say “please”. For instance, “Okay, Google. Please set the timer for 15 minutes.”
You can enable this feature through Google Family Link, where you will find the settings for Home and Assistant. You can apply the new standard to whichever devices you prefer, and once you have worked out the process, setting it up again is straightforward.
These tools and approaches benefit both kids and parents. For now, the devices offer only basic features and limited replies, but in time, technological improvements could encourage children to have much more natural and polite interactions.
It was thinking about issues like this which led me to write my first children’s book — George and the Smart Home. In the book, George is a young boy who has problems getting the smart speakers in his house to do what he wants until he learns to be polite to them.
It is available now, as a paperback and a Kindle book, from Amazon.
Buy it from: AU / BR / CA / DE / ES / FR / IN / IT / JP / MX / NL / UK / US
The post Should Children be Polite While Using Smart Speakers? appeared first on Davblog.