(The image above was the first result I got when searching Google Images for a CC-licensed image for “professional programmer”.)
Two weeks ago, I wrote about the SEO workshop I’m running on Tuesday morning just before The Perl Conference in Glasgow this August. Today, I’d like to give a few more details about the other workshop I’m running that day. After lunch, I’m running a workshop called “The Professional Programmer”. What’s that about?
I came into programming through what was a very traditional route. I did a degree in Computer Studies which I finished in 1988. And for the last thirty years I’ve been working as a programmer for a number of different companies from tiny start-ups to huge multi-nationals.
But more and more, I’m working with people who didn’t come through the same route. It’s very common that I’ll be working with people who don’t have a degree. And it’s rare that I’ll work with someone who’s been in the industry as long as I have (for I am an Old Man). I’m not saying for a second that those people aren’t just as capable of doing the job as I am. But I am saying that I know stuff that some of those people won’t have worked out yet.
This certainly isn’t going to be me telling you stuff that I learned on my degree. To be honest, I can’t think of much on my degree that I’ve used in my career. On my degree course, SQL was introduced as a cutting-edge technology (one lecturer even described it as a reporting tool that could be used by end-users!) We also did classes on COBOL and Assembler. No, there’s very little there that would be of much interest to people working in the modern software industry.
A few days ago, I started to sketch out some of the things I might want to talk about. I think the plan is going to be that we start with some of the technologies that sit alongside the programming that we all do every day and slowly move away from hard tech into the fluffier areas of the industry that we work in. Here are some of the topics I hope to cover.
Ok, we all have a programming language or two under our belts. But what else do we need to know?
How well do you know the operating systems that you work on? What, for example, is the most obscure Unix tool that you know? At what level do you understand the networking features that your code almost certainly makes use of? Can you debug network connectivity problems? To what level of detail do you really know the HTTP request-response cycle?
What data storage systems do you use? How well do you know SQL? Do you use NoSQL systems as part of your technology stack? If not, could you? Do you cache things at the right level in your application? Should you be caching more things? Do you have a CDN? Do you know what a CDN is and what it does for you?
Are you an expert in the tools that you use every day? I don’t care if you prefer vi or emacs (or, I suppose, anything else), but are you an expert in using your editor? I’m happy to admit this is one area where I fall short. I bounce between many different editors and I’ve never really become an expert in any of them.
Are you the person in your team that people come to with git questions? Or do you just know half a dozen commands that seem to do approximately the right thing most of the time? Your source code control system is a vital part of your workflow. Get to know it well.
How well do you know your continuous integration environment? Do you know which buttons to press to get a release built? Or are you the person who is constantly tweaking and improving the Jenkins jobs that power the release process? And what underlies your release process? Are you building RPMs or some other type of package or do you build a new Docker container and deploy that in the cloud? How well do you know the cloud provider that you’re using? Are there new AWS features that could replace parts of your existing infrastructure? (The answer to that question is always yes.)
How good are your tests? What’s your unit test coverage? How many different types of automated testing does your system use? Do you know the difference between unit tests and integration tests? What tools are you using for automated testing? How well do you know how to use them? Is there something better out there?
What level are you involved in architectural decisions? How do you decide on a design for your application? Are you using largely procedural code or does your system make good use of classes? Is it possible for a system to be too object-oriented? How do you know when you’ve crossed that boundary?
How is your knowledge of design patterns? Do you know what a factory class is? Do you know why you would use one? Have you ever written one? Do you have an opinion on MVC designs? What is good and bad about the frameworks that you use? What would you like to do differently?
Are you maintaining a monolithic codebase from fifteen years ago? Do you have a plan to modernise your code? Have you implemented any microservices yet? How do you go about replacing small parts of a monolith with microservices? What are the advantages of a microservices architecture?
Is your team using an agile software development methodology? Is it Scrum, Kanban, XP or do you just cherry-pick bits from all of them? Is your team really agile or do you just pay lip service to agile techniques? Are you self-managing? How accurate are your estimates? Can you improve that? How well do you know the Agile Manifesto? To what extent do you agree with it?
What does your company do? What does success look like? How does what you do contribute to that success? How well do you understand the business? Do you have suggestions for improving the business outside of your team?
Do you understand the environment that the company operates in? What do you know about the economic pressures on the company? Is the company publicly or privately owned? Do you have shares in the company? Do you know what they are worth?
What level are you currently at? Do you know what you need to do in order to progress in the company? Do you have a plan to achieve that? Do you have a mentor inside the company who can help you come up with that plan? Will the company give you budget for training and personal development?
Do you need to communicate with business people inside the company? How good is your written and spoken English? Do you know how to use apostrophes? Do you need to give presentations to people in the company? How comfortable are you with public speaking? Can you get better at that?
How well-known are you outside of the company? Can you blog about your technical expertise? (You probably need to be careful if you’re blogging about stuff you do at work.) Do you speak at conferences? Should you start speaking at conferences?
As you can see, when I start writing this stuff down, it can easily all get a bit “stream of consciousness”. Hopefully in the five weeks between now and the workshop, I can tie it down and impose a little more structure on it.
But not too much structure. I’d like to keep this pretty loose. I want the workshop to be very much a two-way discussion.
I hope that sounds interesting to some of you. The workshop will be in the afternoon of Tuesday 14th August. To attend any of the workshops, you’ll need to buy an extra ticket. Tickets for either of my half-day workshops are £75.
I hope to see some of you there. Please let me know in the comments if you have any questions about this workshop.
I thought it might be interesting to talk about some of the topics I’ll be covering at my workshops at The Perl Conference in Glasgow in August. Today I’ll be talking about the Web Site Tune-Up workshop and in my next post, I’ll cover The Professional Programmer.
And I thought it would be most useful to show you a case study of where I’ve done some work to tune up a web site. So here’s the story of some work I’ve done on the web site for The Perl Conference in Glasgow itself.
When the site first went live, I noticed that it didn’t have any Open Graph tags. Open Graph tags are a series of HTML elements that you can add to the <head> of a web page which tell sites like Facebook interesting things about the page. Most usefully, they can tell Facebook and Twitter which image and text to use when someone shares a link to your site. Obviously, we all want people to share our URLs as widely as possible and having a nice image and a useful description show up when the site is shared is a good way to encourage more sharing (and as I’ve been typing that, I’ve just realised that actually having obvious sharing buttons on the page is another good idea – more work to do there!)
So I needed to find the right place to add these tags. Most web sites are generated by a content management system. Most Perl conference sites use A Conference Toolkit (ACT), so I just needed to look through the conference repo to find the template that generates the header of the page and edit that. Here’s the first commit I made, which just added hard-coded values for the tags. With this in place, the tags looked like this:
<meta property="og:title" content="The Perl Conference - Glasgow 2018" />
<meta property="og:type" content="website" />
<meta property="og:url" content="http://act.perlconference.org/tpc-2018-glasgow/" />
<meta property="og:image" content="http://act.perlconference.org/tpc-2018-glasgow/css/logos/lrg-conf-logo.png" />
That was an improvement, but there were a few problems. Firstly, it was missing a couple of tags that Twitter likes to use, so I added those in this commit. Then I noticed I had forgotten the description (which prevented Twitter from parsing the data correctly). This commit fixed that.
And there it sat until quite recently. But last weekend I decided I needed to fix those hard-coded values. I noticed the problem when I shared a link to the workshops page on Facebook and my post contained information about the home page.
This took a bit more digging. I had to understand a little more about the internals of ACT. But over a series of small commits last weekend, I got it working as I wanted. Actually, not quite as I wanted – the Wiki URLs are still not working properly; I’ll get back to those later on. I also want to change the description on every page – but I’m not sure if that’s possible in ACT.
This weekend we published an initial version of the schedule for the conference – one that only covers the workshops (as those are the only talks with firm dates yet). Initially, it didn’t look very nice as the standard ACT template for the schedule page shows unscheduled talks before scheduled ones. That didn’t make much sense to me as there is a huge list of unscheduled talks and it seemed unlikely that anyone would ever scroll past that to find the scheduled talks. You should also bear in mind that Google is the most important visitor to your page and Google assumes that the most important content on your page comes first. So changing the order is likely to give us an SEO boost.
So I wanted to find a way to fix that, which turned out to be harder than expected. ACT is built on layers of templates. If your ACT instance doesn’t have a particular template, then the default one is used. And it looks like most people just use the default schedule template. But once I had copied that template into our repository, I was free to edit it any way I wanted. I started by doing the re-ordering that I mentioned above. Then I started to consider other options.
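If you’re curious what that re-ordering looks like, here’s the idea sketched in plain Perl. (The real change was in ACT’s templates, and these talk records are simplified stand-ins for ACT’s actual data structures – this is just an illustration of the sort.)

```perl
use strict;
use warnings;

# Hypothetical talk records - ACT's real data structures will differ.
my @talks = (
    { title => 'Unscheduled lightning talk',    datetime => undef },
    { title => 'The Professional Programmer',   datetime => '2018-08-14 14:00' },
    { title => 'Web Site Tune-Up',              datetime => '2018-08-14 09:30' },
);

# Scheduled talks first (in time order), unscheduled talks afterwards.
my @ordered = sort {
    ( ( defined $b->{datetime} ? 1 : 0 ) <=> ( defined $a->{datetime} ? 1 : 0 ) )
        || ( ( $a->{datetime} // '' ) cmp ( $b->{datetime} // '' ) )
} @talks;

print "$_->{title}\n" for @ordered;
```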
Firstly, the default formatting of the schedule on an ACT site is a little ugly. But I knew that the TPC Glasgow site was built using Bootstrap. So I knew that I could use Bootstrap’s table classes to make the schedule table look a little nicer. That was just a case of adding some classes to the template that generates the table (and, actually, removing quite a lot of unnecessary classes and presentation mark-up – removing presentation mark-up is another good tip for SEO).
Finally, I wanted to change the order of the data in each cell of the presentation table. Remember when I said above that the most important data should come first? Well, if you’re presenting data about a conference talk, what’s the most important piece of information? The default template showed the speaker’s name before the title of the talk. I wanted to reverse that (I also wanted to split the data across several lines). It turned out that this mark-up was in another template which contained a number of “utility” macros that I had to copy into our repo. But once I had done that, it was simple to make the changes I wanted. The current version of the schedule layout is in the image at the top of this post. I hope you agree it looks nicer than the old version.
So that’s where I’ve got to. There are a few other fixes I’d like to make:
But that’s a pretty good example of the kinds of things I’ll be talking about in my Web Site Tune-Up workshop. To summarise what I’ve done:
This only touches on the kind of information I’ll be covering in the workshop. There will be dozens more practical tips you can use to improve Google’s understanding of your web site.
The half-day workshop takes place on Tuesday 14th August from 09:30-13:00. Tickets are available when you book your ticket for the main conference and cost £75.
Hope to see you there.
When I signed up for my Monzo bank account last year, one of the things that really excited me was the API they made available. Of course, as is so often the way with these things, my time was taken up with other things and I never really got any further than installing the Perl module that wrapped the API.
The problem is that writing code against an API takes too long. Oh, it’s generally not particularly difficult, but there’s always something that’s more complicated than you think it’s going to be.
So I was really interested to read last week that Monzo now works with IFTTT. IFTTT (“If This Then That”) is a service which removes the complexity from API programming. You basically plug services together to do something useful. I’ve dabbled with IFTTT before – I have “applets” which automatically post my Instagram photos to Twitter, change my phone’s wallpaper to NASA’s photo of the day, tell me when the ISS is overhead – things like that – so I knew this would be an easier way to do interesting things with the Monzo API – without all that tedious programming.
An IFTTT applet has two parts. There’s a “trigger” (something that tells the applet to run) and an “action” (what you want it to do). Monzo offers both triggers and actions. The triggers are mostly fired when you make a purchase with your card (optionally filtered on things like the merchant or the amount). The actions are moving money into or out of a pot (a pot in a Monzo account is a named, ring-fenced area in your account where you can put money that you want to set aside for a particular purpose).
You can use a Monzo trigger and action together (when I buy something at McDonald’s, move £5 to my “Sin Bin” pot) but more interesting things happen if you combine them with triggers and actions from other providers (move £5 into my “Treats” pot when I do a 5K run – there are dozens of providers).
I needed an example to try it out. I decided to make a Twitter Swear Box. The idea is simple. If I tweet a bad word, I move £1 from my main account into my Swear Box pot.
The action part is simple enough. Monzo provides an action to move money into a pot. You just need to give it the name of the pot and the amount to move.
The trigger part is a little harder. Twitter provides a trigger that fires whenever I tweet, but that doesn’t let me filter it to look for rude words. But there’s also a Twitter Search trigger which fires whenever a Twitter search finds a tweet which matches a particular search criterion. I used https://twitter.com/search-advanced to work out the search string to use and ended up with “fudge OR pish OR shirt from:davorg”. There’s a slight problem here – it doesn’t find other versions of the words like “fudging” or “shirty” – but this is good enough for a proof of concept.
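The Twitter Search trigger can’t do stemming, but if you were filtering tweets in your own code, a Perl regex would handle those variants easily. A quick sketch (the word list is the one from my search string above):

```perl
use strict;
use warnings;

# Match the base words plus variants like "fudging" or "shirty".
my $swear = qr/\b(?:fudge|pish|shirt)\w*/i;

for my $tweet ( 'That was a fudging disaster', 'Nice weather today' ) {
    print "Fine owed: $tweet\n" if $tweet =~ $swear;
}
```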
Creating the applet is as simple as choosing the services you want to use, selecting the correct trigger and action and then filling in a few (usually pretty obvious) details. Within fifteen minutes I had it up and running. I sent a tweet containing the word “fudge” and seconds later there was a pound in my Swear Box pot.
Tonight, I was at a meeting at Monzo’s offices where they talked about how they developed the IFTTT integration and what directions it might go in the future. I asked for the latitude and longitude of a transaction to be included in the details that IFTTT gets – I have a plan to plot my transactions on a map.
Monzo is the first bank to release an integration with IFTTT and it really feels like we’re on the verge of something really useful here. It’ll be great to see where they take the service in the future.
It’s June, which means it’s only a couple of months until the European Perl community descends en masse on Glasgow for this year’s Perl Conference (formerly known as YAPC). For me, that also means I need to start planning the training courses I’ll be running before the conference. And for you, it means you need to start deciding which training courses you want to come to before the conference.
This year, it looks like there will be one day of training courses on the day before the main conference starts (that’s Tuesday 14th August). There are a number of courses being offered – details are in a recent conference newsletter.
I’ll be giving two half-day courses and, unusually, there will be little or no Perl content in either of them. Here are the details:
Many of us have web sites and for most web sites, success is measured by the number of visitors you get. And, in most of the western world, getting your web site to rank higher in Google’s search results is one powerful tool for bringing in more visitors.
In this half-day course, I’ll be introducing a number of simple tips that will make your site more attractive to Google which will, hopefully, improve your search ranking. If you make it easier for Google to understand the contents and structure of your site, then Google is more likely to want to send visitors to your site. (Other search engines are, of course, available but if you keep Google happy, you’ll be keeping them happy too.)
I ran a short version of this course at the London Perl Workshop last year. This version will be twice as long (and twice as detailed).
Some people seem surprised that being really good at programming isn’t the only skill you need in order to have a successful career in software development.
I’ve been working in this industry for thirty years and I like to think I’ve been pretty successful. In this half-day course, I’ll look at some of the other skills that you need in order to do well in this industry. We’ll look at a range of skills from more technical areas like source code control and devops, to softer areas like software development methodologies and just working well with others.
I ran a two-hour version of this course at a London Perl Workshop in the dim and distant past. This version will be updated and expanded.
Both courses will be taking place on the same day. I’m not sure where they will be held, but I’ll let you know as soon as I have that information. Each half-day session costs £75 and you can book places on the conference web site. Places on the courses will be limited, so I recommend booking as soon as possible.
Do these courses sound interesting? Please let me know your thoughts in the comments.
Some of you might remember the lightning talk I gave at the London Perl Workshop last year (it’s available on YouTube, I’ll wait if you want to watch it). In it, I said I planned to resurrect the Perl School brand, using it to publish Perl ebooks. One book, Perl Taster, was already available and I had plans to write and publish several more. Those plans are still ongoing…
Also in the talk, I asked if anyone else wanted to write a book for the series. I offered to help out with the hard parts of getting your text into the Amazon system (it’s actually nowhere near as hard as you might think). Three people approached me later to discuss the possibility of writing something, but only one followed through with something more concrete. That was John Davies, who has been a regular attendee at London Perl Mongers for a pretty long time. At the LPW, John had helped Martin Berends to run a training course on using Selenium with Perl. As part of that help, John had written some notes on the course which had been distributed to the attendees. John wondered if those notes would be worth publishing as a Perl School ebook. I said that it was certainly worth pursuing the idea.
Over the last few months, John has expanded his original notes considerably and I’ve been doing the work needed to convert his input into an ebook. And I’m happy to say that the book was published on Amazon yesterday. It’s called Selenium and Perl and you should be able to find it on your local Kindle store. If you want to test your Perl web applications using Selenium, then I hope that you find it useful.
This was the first time I’ve edited someone else’s work and converted it into an ebook. I think the process has gone well (perhaps someone should ask John for his opinion!)
But I’m confident enough of the process to renew the offer I made at the LPW. If you’re interested in writing an ebook as part of the Perl School range, please get in touch and we can discuss it.
Yesterday I was at my second Brighton SEO conference. I enjoyed it every bit as much as the last one and I’m already looking forward to the next. Here are my notes about the talks I saw.
I misread the description for this. I thought it would be about clever ways to use command-line tools for SEO purposes. But, actually, it was a basic introduction to Unix command-line text processing tools for people who were previously unaware of them. I wasn’t really the target audience, but it’s always good to see a largely non-technical audience being introduced to the powerful tools that I use every day.
A good introduction to why HTTP/2 is good news for web traffic (it’s faster) and a great trucking analogy explaining what HTTP is and how HTTP/2 improves on current systems. I would have liked more technical detail, but I realise most of the audience wouldn’t.
To be honest, I was only here because it was the last talk in the session and I didn’t have time to move elsewhere. I have never worked on a site with pages that are translated into other languages, so this was of limited interest to me. But Emily certainly seemed to know her stuff and I’m sure that people who use “hreflang” would have found it very interesting and useful.
One thing bothered me slightly about the talk. A couple of times, Emily referred to developers in slightly disparaging ways. And I realised that I’ve heard similar sentiments before at SEO events. It’s like developers are people that SEO analysts are constantly battling with to get their work done. As a developer myself (and one who has spent the last year implementing SEO fixes on one of the UK’s best-known sites) I don’t really understand this attitude – it’s something I’ve never come across.
It’s annoyed me enough that I’m considering proposing a talk called “I Am Developer” to the next Brighton SEO in order to try to get to the bottom of this issue.
Fili is a former Google Search Quality Engineer, so he certainly knows his stuff. But this talk seemed a bit scattershot to me – it didn’t seem to have a particularly clear focus.
This was probably the talk I was looking forward to most. I’ve been dabbling in JSON-LD on a few sites recently and I’m keen to get deeper into it. Alexis didn’t disappoint – this was a great introduction to the subject and (unlike some other speakers) she wasn’t afraid to go deeper when it was justified.
Her first slide showed some JSON-LD and she asked us to spot the five errors in it. I’m disappointed to report that I only caught two of them.
This started well. A good crawling strategy is certainly important for auditing your site and ensuring that everything still works as expected. However, I was slightly put off by Sam’s insistence that a cloud-based crawling tool was an essential part of this strategy. Sam works for Deep Crawl who just happen to have a cloud-based crawling tool that they would love to sell you.
Conferences like this are at their best when the experts are sharing their knowledge with the audience without explicitly trying to sell their services. Sadly, this talk fell just on the wrong side of that line.
Then it was lunchtime and my colleagues and I retired just around the corner to eat far too much pizza that was supplied by the nice people at PI Datametrics.
This was really interesting. Rob says that featured snippets are on the rise and had some interesting statistics that will help you get your pages into a featured snippet. He then went on to explain how featured snippets are forming the basis of Google’s Voice Search – that is, if you ask Google Home or Google Assistant a question, the reply is very likely to be the featured snippet that you’d get in response to the same query on the Google Search Engine. This makes it an even better idea to aim at getting your content into featured snippets.
Sam Robson / Slides
Sam works for Future Publishing, on their Tech Radar site. He had some interesting war stories about dealing with Google algorithm changes and coming out the other side with a stronger site that is well-placed to capitalise on big technical keywords.
[I can’t find his slides online. I’ll update this post if I find them.]
This tied in really well with the other talks in the session. Jason has good ideas about how to get Google to trust your site more by convincing Google that you are the most credible source for information on the topics you cover. He also talked a lot about the machine learning that Google are currently using and where that might lead in the future.
I was at a bit of a loose end for the final session. Nothing really grabbed me. In the end I just stayed in the same room I’d been in for the previous session. I’m glad I did.
All too often, I’ve seen companies who don’t really know how to report effectively on how successfully (or otherwise!) their web sites are performing. And that’s usually because they don’t know what metrics are important or useful to them. Stephen had some good ideas about identifying the best metrics to track and ensuring that the right numbers are seen by the right people.
Having followed Stephen’s advice and chosen the metrics that you need to track, Anna can show you how to record those metrics and how to also capture other useful information. As a good example, she mentioned a client who was an amusement park. Alongside the usual kinds of metrics, they had also been able to track the weather conditions at the time someone visited the site and had used that data to correlate ticket sales with the weather.
Anna seemed to be a big fan of Google Tag Manager which I had previously dismissed. Perhaps I need to revisit that.
Dana DiTomaso / Slides
And once you have all of your data squirrelled away in Google Analytics, you need a good tool to turn it into compelling and useful reports. Dana showed us how we could do that with Google Data Studio – another tool I need to investigate in more detail.
[I can’t find her slides online. I’ll update this post if I find them.]
Two things struck me while watching the keynote conversation between John Mueller and Aleyda Solis. Firstly, I thought that Aleyda was the wrong person to be running the session. I know that Brighton SEO tries hard not to be the usual stuffy, corporate type of conference, but I thought her over-familiar and jokey style didn’t go well in a conversation with Google’s John Mueller.
Secondly, I had a bit of an epiphany about the SEO industry. All day, I’d been watching people trying to explain how to get your site to do well in Google (other search engines are, of course, available but, honestly, who cares about them?) but they’re doing so without any real knowledge of how the Mighty God of Search really works.
Oh, sure, Google gives us tools like Google Analytics which allow us to see how well we’re doing and Google Search Console which will give us clues about ways we might be doing better. But, ultimately, this whole industry is trying to understand the inner workings of a company that tells us next to nothing.
This was really obvious in the conversation with John Mueller. Pretty much every question was answered with a variation on “well, I don’t think we’d talk publicly about the details of that algorithm” or “this is controlled by a variety of factors that will change frequently, so I don’t think it’s useful to list them”.
The industry is largely stumbling about in the dark. We can apply the scientific method – we propose a hypothesis, run experiments, measure the results, adjust our hypothesis and repeat. Sometimes we might get close to a consensus on how something works. But then (and this is where SEO differs from real science) Google change their algorithms and everything we thought we knew has now changed.
Don’t get me wrong, it’s a fascinating process to watch. And, to a lesser extent, to be involved in. And there’s a lot riding on getting the results right. But in many ways, it’s all ultimately futile.
Wow, that got dark quickly! I should finish by saying that, despite what I wrote above, Brighton SEO is a great conference. If you want more people to visit your web site, you should be interested in SEO. And if you’re interested in SEO, you should be at Brighton SEO.
See you at the next one – it’s on September 28th.
There was a London Perl Mongers meeting at ZPG about ten days ago. I gave a short talk explaining why (and how) a republican like me came to be running a site about the Line of Succession to the British Throne. The meeting was great (as they always are) and I think my talk went well (I’m told the videos are imminent). The photo shows the final slide from my talk. I left it up so it was a backdrop for a number of announcements that various people gave just before the break.
In order to write my talk, I revisited the source code for my site and, in doing so, I realised that there were a couple of chunks of logic that I could (and should) carve out into separate distributions that I could put on CPAN. I’ve done that over the last couple of days and the modules are now available.
I’ve written the module as a Moo role, which means it should be usable in Moose classes too. To add JSON-LD to your class, you need to do three things:
Your class inherits two methods from the role: json_ld_data(), which returns the data structure that will be encoded into JSON (it’s provided in case you want to massage the data before encoding it), and json_ld(), which returns the actual encoded JSON in a format that’s suitable for embedding in a web page.
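To give a flavour of what the role produces, here’s a hand-rolled sketch using core JSON::PP. This isn’t the module’s actual code, and the Person data is just an example I’ve made up:

```perl
use strict;
use warnings;
use JSON::PP;

# A hand-rolled stand-in for what a json_ld_data() method might return.
# The real Moo role builds this structure from attributes on your class.
sub json_ld_data {
    return {
        '@context' => 'http://schema.org/',
        '@type'    => 'Person',
        name       => 'Prince Charles',
    };
}

sub json_ld {
    # canonical => stable key order, which makes the output predictable.
    return JSON::PP->new->canonical->encode( json_ld_data() );
}

print json_ld(), "\n";
```

The resulting string is what you’d drop into a <script type="application/ld+json"> block on your page.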
One of the most satisfying parts of the Line of Succession site to write was the code that shows the relationship between a person in the line and the current sovereign. Prince Charles (currently first in line) is the son of the sovereign and Tāne Lewis (currently thirtieth in line) is the first cousin twice removed of the sovereign.
That code might be useful to other people, so it’s now on CPAN as Genealogy::Relationship. To be honest, I’m not sure exactly how useful it will be. The Line of Succession is a rather specialised version of a family tree – because we’re tracing a bloodline, we’re only interested in one parent (which is unlike normal genealogy where we’d be interested in both). It also might be too closely tied to the data model I use on my site – but I have plans to fix that soon.
Currently, because of the requirements of my site, it only goes as far as third cousins (that’s people who share the same great-great-grandparents). That’s five generations. But I have an idea to build a site which shows the relationship between any two English or British monarchs back to 1066. I think that’s around forty generations – so I’ll need to expand the coverage somewhat!
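The naming logic follows a standard genealogical rule: count the generations from each person up to their nearest common ancestor; the smaller count minus one gives the cousin degree, and the difference between the counts gives the “removed”. Here’s a sketch of just the cousin cases in plain Perl – this is my own illustration of the rule, not the module’s actual code:

```perl
use strict;
use warnings;

# Given the number of generations from each person up to their nearest
# common ancestor, name the cousin relationship between them.
# Direct lines, siblings and aunts/uncles need their own branches,
# which I've left out of this sketch.
sub cousin_relationship {
    my ( $up_a, $up_b ) = @_;

    # Not a cousin relationship if either person is fewer than two
    # generations below the common ancestor.
    return if $up_a < 2 or $up_b < 2;

    my @ordinal = qw(zeroth first second third fourth fifth);
    my $degree  = ( $up_a < $up_b ? $up_a : $up_b ) - 1;
    my $removed = abs( $up_a - $up_b );

    my $name = "$ordinal[$degree] cousin";
    $name .= ' once removed'           if $removed == 1;
    $name .= ' twice removed'          if $removed == 2;
    $name .= " $removed times removed" if $removed > 2;
    return $name;
}

# E.g. someone four generations below the common ancestor, where the
# sovereign is two generations below it:
print cousin_relationship( 4, 2 ), "\n";    # first cousin twice removed
```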
But anyway, it’s there and I’d be happy if you tried it and let me know whether it works for you. The documentation should explain all you need to know.
The Line of Succession site doesn’t get much traffic yet – I really need to do more marketing for it. So it’s satisfying to know that some of the code might, at least, be useful outside of the project.