Some of you might remember the lightning talk I gave at the London Perl Workshop last year (it’s available on YouTube, I’ll wait if you want to watch it). In it, I said I planned to resurrect the Perl School brand, using it to publish Perl ebooks. One book, Perl Taster, was already available and I had plans to write and publish several more. Those plans are still ongoing…
Also in the talk, I asked if anyone else wanted to write a book for the series. I offered to help out with the hard parts of getting your text into the Amazon system (it’s actually nowhere near as hard as you might think). Three people approached me later to discuss the possibility of writing something, but only one followed through with something more concrete. That was John Davies, who has been a regular attendee at London Perl Mongers for a pretty long time. At the LPW, John had helped Martin Berends to run a training course on using Selenium with Perl. As part of that help, John had written some notes on the course which had been distributed to the attendees. John wondered if those notes would be worth publishing as a Perl School ebook. I said that it was certainly worth pursuing the idea.
Over the last few months, John has expanded his original notes considerably and I’ve been doing the work needed to convert his input into an ebook. And I’m happy to say that the book was published on Amazon yesterday. It’s called Selenium and Perl and you should be able to find it on your local Kindle store. If you want to test your Perl web applications using Selenium, then I hope that you find it useful.
This was the first time I’ve edited someone else’s work and converted it into an ebook. I think the process has gone well (perhaps someone should ask John for his opinion!).
But I’m confident enough of the process to renew the offer I made at the LPW. If you’re interested in writing an ebook as part of the Perl School range, please get in touch and we can discuss it.
Yesterday I was at my second Brighton SEO conference. I enjoyed it every bit as much as the last one and I’m already looking forward to the next. Here are my notes about the talks I saw.
I misread the description for this. I thought it would be about clever ways to use command-line tools for SEO purposes. But, actually, it was a basic introduction to Unix command-line text processing tools for people who were previously unaware of them. I wasn’t really the target audience, but it’s always good to see a largely non-technical audience being introduced to the powerful tools that I use every day.
A good introduction to why HTTP/2 is good news for web traffic (it’s faster) and a great trucking analogy explaining what HTTP is and how HTTP/2 improves on current systems. I would have liked more technical detail, but I realise most of the audience wouldn’t.
To be honest, I was only here because it was the last talk in the session and I didn’t have time to move elsewhere. I have never worked on a site with pages that are translated into other languages, so this was of limited interest to me. But Emily certainly seemed to know her stuff and I’m sure that people who use “hreflang” would have found it very interesting and useful.
One thing bothered me slightly about the talk. A couple of times, Emily referred to developers in slightly disparaging ways. And I realised that I’ve heard similar sentiments before at SEO events. It’s like developers are people that SEO analysts are constantly battling with to get their work done. As a developer myself (and one who has spent the last year implementing SEO fixes on one of the UK’s best-known sites) I don’t really understand this attitude – it’s something I’ve never come across.
It’s annoyed me enough that I’m considering proposing a talk called “I Am Developer” to the next Brighton SEO in order to try to get to the bottom of this issue.
Fili is a former Google Search Quality Engineer, so he certainly knows his stuff. But this talk seemed a bit scattershot to me – it didn’t seem to have a particularly clear focus.
This was probably the talk I was looking forward to most. I’ve been dabbling in JSON-LD on a few sites recently and I’m keen to get deeper into it. Alexis didn’t disappoint – this was a great introduction to the subject and (unlike some other speakers) she wasn’t afraid to go deeper when it was justified.
Her first slide showed some JSON-LD and she asked us to spot the five errors in it. I’m disappointed to report that I only caught two of them.
This started well. A good crawling strategy is certainly important for auditing your site and ensuring that everything still works as expected. However, I was slightly put off by Sam’s insistence that a cloud-based crawling tool was an essential part of this strategy. Sam works for Deep Crawl who just happen to have a cloud-based crawling tool that they would love to sell you.
Conferences like this are at their best when the experts are sharing their knowledge with the audience without explicitly trying to sell their services. Sadly, this talk fell just on the wrong side of that line.
Then it was lunchtime and my colleagues and I retired just around the corner to eat far too much pizza that was supplied by the nice people at PI Datametrics.
This was really interesting. Rob says that featured snippets are on the rise and had some interesting statistics that will help you get your pages into a featured snippet. He then went on to explain how featured snippets are forming the basis of Google’s Voice Search – that is, if you ask Google Home or Google Assistant a question, the reply is very likely to be the featured snippet that you’d get in response to the same query on the Google Search Engine. This makes it an even better idea to aim at getting your content into featured snippets.
Sam Robson / Slides
Sam works for Future Publishing, on their Tech Radar site. He had some interesting war stories about dealing with Google algorithm changes and coming out the other side with a stronger site that is well-placed to capitalise on big technical keywords.
[I can’t find his slides online. I’ll update this post if I find them.]
This tied in really well with the other talks in the session. Jason has good ideas about how to get Google to trust your site more by convincing Google that you are the most credible source for information on the topics you cover. He also talked a lot about the machine learning that Google are currently using and where that might lead in the future.
I was at a bit of a loose end for the final session. Nothing really grabbed me. In the end I just stayed in the same room I’d been in for the previous session. I’m glad I did.
All too often, I’ve seen companies who don’t really know how to report effectively on how successfully (or otherwise!) their web sites are performing. And that’s usually because they don’t know what metrics are important or useful to them. Stephen had some good ideas about identifying the best metrics to track and ensuring that the right numbers are seen by the right people.
Having followed Stephen’s advice and chosen the metrics that you need to track, Anna can show you how to record those metrics and how to also capture other useful information. As a good example, she mentioned a client who ran an amusement park. Alongside the usual kinds of metrics, they had also been able to track the weather conditions at the time someone visited the site and had used that data to correlate ticket sales with the weather.
Anna seemed to be a big fan of Google Tag Manager which I had previously dismissed. Perhaps I need to revisit that.
Dana DiTomaso / Slides
And once you have all of your data squirrelled away in Google Analytics, you need a good tool to turn it into compelling and useful reports. Dana showed us how we could do that with Google Data Studio – another tool I need to investigate in more detail.
[I can’t find her slides online. I’ll update this post if I find them.]
Two things struck me while watching the keynote conversation between John Mueller and Aleyda Solis. Firstly, I thought that Aleyda was the wrong person to be running the session. I know that Brighton SEO tries hard not to be the usual stuffy, corporate type of conference, but I thought her over-familiar and jokey style didn’t go well in a conversation with Google’s John Mueller.
Secondly, I had a bit of an epiphany about the SEO industry. All day, I’d been watching people trying to explain how to get your site to do well in Google (other search engines are, of course, available but, honestly, who cares about them?) but they were doing so without any real knowledge of how the Mighty God of Search really works.
Oh, sure, Google gives us tools like Google Analytics which allow us to see how well we’re doing and Google Search Console which will give us clues about ways we might be doing better. But, ultimately, this whole industry is trying to understand the inner workings of a company that tells us next to nothing.
This was really obvious in the conversation with John Mueller. Pretty much every question was answered with a variation on “well, I don’t think we’d talk publicly about the details of that algorithm” or “this is controlled by a variety of factors that will change frequently, so I don’t think it’s useful to list them”.
The industry is largely stumbling about in the dark. We can apply the scientific method – we propose a hypothesis, run experiments, measure the results, adjust our hypothesis and repeat. Sometimes we might get close to a consensus on how something works. But then (and this is where SEO differs from real science) Google change their algorithms and everything we thought we knew has now changed.
Don’t get me wrong, it’s a fascinating process to watch. And, to a lesser extent, to be involved in. And there’s a lot riding on getting the results right. But in many ways, it’s all ultimately futile.
Wow, that got dark quickly! I should finish by saying that, despite what I wrote above, Brighton SEO is a great conference. If you want more people to visit your web site, you should be interested in SEO. And if you’re interested in SEO, you should be at Brighton SEO.
See you at the next one – it’s on September 28th.
There was a London Perl Mongers meeting at ZPG about ten days ago. I gave a short talk explaining why (and how) a republican like me came to be running a site about the Line of Succession to the British Throne. The meeting was great (as they always are) and I think my talk went well (I’m told the videos are imminent). The photo shows the final slide from my talk. I left it up so it was a backdrop for a number of announcements that various people gave just before the break.
In order to write my talk, I revisited the source code for my site and, in doing so, I realised that there were a couple of chunks of logic that I could (and should) carve out into separate distributions that I could put on CPAN. I’ve done that over the last couple of days and the modules are now available.
I’ve written the module as a Moo role, which means it should be usable in Moose classes too. To add JSON-LD to your class, you need to do three things:
Your class inherits two methods from the role – json_ld_data(), which returns the data structure that will be encoded into JSON (it’s provided in case you want to massage the data before encoding it), and json_ld(), which returns the actual encoded JSON in a format that’s suitable for embedding in a web page.
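The three steps aren’t spelled out above, so here’s a hedged sketch of what consuming the role might look like. The role name (MooX::Role::JSON_LD), the class name and attributes, and the json_ld_type()/json_ld_fields() hooks are all assumptions on my part – check the module’s documentation on CPAN for the real interface:

```perl
package My::Person;

use Moo;
with 'MooX::Role::JSON_LD';   # assumed role name

has name  => (is => 'ro');
has email => (is => 'ro');

# Tell the role which schema.org type to emit...
sub json_ld_type { 'Person' }

# ...and which attributes to include in the JSON-LD output.
sub json_ld_fields { [ qw( name email ) ] }

1;

# Elsewhere:
# my $p = My::Person->new(name => 'Dave', email => 'dave@example.com');
# print $p->json_ld;   # JSON text, ready to embed in a page
```

The appeal of doing this as a Moo role is that the JSON-LD behaviour bolts onto any existing class (Moo or Moose) with a single with statement, rather than needing a separate serialiser object.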
One of the most satisfying parts of the Line of Succession site to write was the code that shows the relationship between a person in the line and the current sovereign. Prince Charles (currently first in line) is the son of the sovereign and Tāne Lewis (currently thirtieth in line) is the first cousin twice removed of the sovereign.
That code might be useful to other people, so it’s now on CPAN as Genealogy::Relationship. To be honest, I’m not sure exactly how useful it will be. The Line of Succession is a rather specialised version of a family tree – because we’re tracing a bloodline, we’re only interested in one parent (which is unlike normal genealogy where we’d be interested in both). It also might be too closely tied to the data model I use on my site – but I have plans to fix that soon.
Currently, because of the requirements of my site, it only goes as far as third cousins (that’s people who share the same great-great-grandparents). That’s five generations. But I have an idea to build a site which shows the relationship between any two English or British monarchs back to 1066. I think that’s around forty generations – so I’ll need to expand the coverage somewhat!
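The core idea behind that relationship code can be sketched in a few lines. This is not Genealogy::Relationship’s actual API – just an illustration, under the single-parent bloodline assumption described above, of how the pair of generation counts up to a common ancestor turns into a relationship name:

```perl
#!/usr/bin/perl
use strict;
use warnings;

# Given how many generations each person is below their most recent
# common ancestor, name the relationship (cousin cases only - a sketch,
# not the module's real interface).
sub describe_relationship {
    my ($up1, $up2) = @_;   # generations up to the common ancestor

    return 'same person' if $up1 == 0 && $up2 == 0;
    return 'child'       if $up1 == 1 && $up2 == 0;
    return 'grandchild'  if $up1 == 2 && $up2 == 0;
    return 'sibling'     if $up1 == 1 && $up2 == 1;

    # Cousin degree comes from the shorter leg; "removed" from the gap.
    my $degree  = ($up1 < $up2 ? $up1 : $up2) - 1;
    my $removed = abs($up1 - $up2);

    my %ord   = (1 => 'first', 2 => 'second', 3 => 'third');
    my %times = (1 => 'once',  2 => 'twice');

    my $name = ($ord{$degree} // "${degree}th") . ' cousin';
    $name .= ' ' . ($times{$removed} // "$removed times") . ' removed'
        if $removed;
    return $name;
}

# Prince Charles is one generation below the common ancestor (the Queen
# herself), so:
print describe_relationship(1, 0), "\n";   # child

# Someone four generations below a common ancestor that the sovereign is
# two generations below:
print describe_relationship(4, 2), "\n";   # first cousin twice removed
```

Extending this to forty generations would mostly mean generating the ordinal names (“thirty-ninth cousin”) rather than looking them up in a small hash.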
But anyway, it’s there and I’d be happy if you tried it and let me know whether it works for you. The documentation should explain all you need to know.
The Line of Succession site doesn’t get much traffic yet – I really need to do more marketing for it. So it’s satisfying to know that some of the code might, at least, be useful outside of the project.