Last night's @CHVRCHES setlist - (very similar to when I saw them in Tufnell Park -


So today I've been playing with an old Kindle 3G. Clunkiest interface ever. How did we ever think they were cool?


But @Churches always have good support acts (well, with the exception of Lizzo!)


Very much enjoying Four Tet. Hadn't heard him before.


And after all that excitement, I've just realised I can't go to the March @CHVRCHES gig as I'll be on holiday :-/


Doctor Who Festival

In 2013, to celebrate the 50th anniversary of Doctor Who, the BBC put on a big celebration at the Excel centre in London’s Docklands. They must have thought that it went well as this year they decided to do it all over again at the Doctor Who Festival which took place last weekend. Being the biggest Doctor Who fan I know, I was at both events and I thought it might be interesting to compare them.

Each event ran over three days (Friday to Sunday). I visited both events on the Sunday on the basis that there would be one more episode of the show to talk about. This was particularly important in 2013 when the 50th anniversary special was broadcast on the Saturday night.


Let’s start with the basics. This year’s event was more expensive than the 2013 one. And the price increases were both large and seemingly random. Here’s a table comparing the prices.

                       Standard                          Tardis
            Adult     Child     Family       Adult     Child     Family
2013       £45.00    £20.00    £104.00      £95.50    £44.25    £218.00
2015       £68.00    £32.35    £171.00     £116.00    £52.75    £293.00
Increase   51.11%    61.75%     64.42%      21.47%    19.21%     34.40%

You’ll see that some prices “only” went up by about 20% while others increased by an eye-watering 65%. There’s obviously money to be made in these events. And, equally obviously, Doctor Who fans are happy to pay any price for entrance to these events. I don’t know about you, but those increases over two years where inflation has hovered around 0% scream “rip-off” to me.

You’ll notice that I’ve quoted prices for two different types of ticket. There are standard tickets and “Tardis” tickets. Tardis tickets give you certain extras. We’ll look at those next.

Tardis Tickets

I’ll admit here that I went for the Tardis ticket both times. The big advantage that this ticket gives you is that in the big panels (and we’ll see later how those panels are the main part of the day) the front eight or so rows are reserved for Tardis ticket holders. So if you have a Tardis ticket you are guaranteed to be close enough to see the people on the stage. Without a Tardis ticket you can be at the far end of the huge hall where you might be able to make out that some people are on the stage, but you’ll be relying on the big video screens to see what is going on.

To me, that’s the big advantage of the Tardis ticket. Does it justify paying almost double the standard ticket price? I’m not sure. But you get a couple of other advantages. You get a free goodie bag. In 2013, that contained a load of tat (postcards, stickers, a keyfob, stuff like that) that I ended up giving away. This year we got the show book (which was pretty interesting and very nearly worth the £10 they were charging for it) and a t-shirt (which was being sold on the day for £25). So the 2015 goodie bag was a massive improvement on the 2013 one.

Tardis ticket-holders also got access to a special lounge where you could relax and partake of free tea, coffee and biscuits. In 2013 this was in a private area away from the rest of the show. This year it was a cordoned-off corner of the main exhibition hall, which didn’t seem like quite so much of a haven of calm.

Main Panels

The main structure of the day is made up of three big discussion panels that are held in a huge room. Each panel is run twice during the day, but when you buy your ticket you know which time you’ll be seeing each panel.

Each panel has people who are deeply involved in the show. In 2013 we had the following panels:

This year we had:

Both sets of panels were equally interesting. Having the former Doctors taking part in the 50th anniversary year made a lot of sense.

Exhibition Hall

The other main part of the event was an exhibition hall where various things were taking place. I think this was disappointing this year. Here are some comparisons:

Sets from the show

As far as I can remember, in 2013 there was only the entrance to Totter’s Yard and the outside of a Tardis. This year there was Davros’ hospital room, Clara’s living room and the outside of a Tardis (although this clearly wasn’t a “real” Tardis – the font on the door sign was terrible). So there were more sets this year, but I rather questioned their description of Clara’s living room as an “iconic” set.

Merchandise

There were a lot of opportunities to buy stuff, but it seemed to me that there were rather fewer stalls there this year. Merchandise seemed to fall into two categories. There was stuff that you would have been better off buying from Amazon (DVDs, board games, books, stuff like that). And there was really expensive stuff. I really can’t justify spending £60 or £80 for incredibly intricate replicas of props from the show or £200(!) for a copy of one of the Doctor’s coats.

There was one big exception to the “cheaper on Amazon” rule. The BBC shop had a load of classic DVDs on sale for £6 each.

In 2013 I bought a couple of postcards. This year I managed to resist buying anything. But I appeared to be rather unusual in that – there were a lot of people carrying many large bags of stuff.

Other Stages

Both years, around the edge of the main hall there were areas where other talks and workshops were taking place. This year’s seemed slightly disappointing. For example, on one stage in 2013 I saw Dick Maggs giving an interesting talk about working with Delia Derbyshire to create the original theme tune. The equivalent area this year had a group of assistant directors giving a list of the people who work on set when an episode of the show is being made.

In 2013, the centre of this room was given over to an area where many cast members from the show’s history were available for autographs and photos. This year, that’s where Clara’s living room was set up. In fact the four cast members who were in the panel I mentioned above were the only cast members who were involved in this event at all. I realise that it makes more sense for there to be lots of cast members involved in the 50th anniversary celebrations, but surely there were some other current cast members who could have turned up and met their fans.

Also in this hall was an area where the Horror Channel (who are the current home of Classic Doctor Who in the UK) were showing old episodes. There was something similar in 2013, but (like the Tardis lounge) it was away from the main hall. Moving this and the Tardis lounge to the main hall made me think that they were struggling a bit to fill the space.

In Summary

This year’s event was clearly a lot more expensive than the one in 2013 and I think attendees got rather less for their money. All in all I think it was slightly disappointing.

The big panels are clearly the centrepiece of the event and they are well worth seeing. But I think you need a Tardis ticket in order to guarantee getting a decent view. Oh, yes you can get in the ninth row without a Tardis ticket, but you’d be competing with a lot of people for those seats. You’d spend the whole day queuing to stand a chance of getting near the front.

I don’t know what the BBC’s plans for this event are, but it’s clearly a good money-spinner for them and I’d be surprised if they didn’t do it again either next year or in 2017. And the fans don’t really seem to mind how much they pay to attend, so it’ll be interesting to see how the next one is priced.

I think that the big panels still make the event worth attending, but there’s really not much else that I’m interested in. So I’m undecided as to whether I’d bother going again in the future.

Were you at the event? What did you think of it? How much money did you spend in total?

The post Doctor Who Festival appeared first on Davblog.

books read

The Ancient Guide to Modern Life

The Ancient Guide to Modern Life
author: Natalie Haynes
name: David
average rating: 3.82
book published: 2010
rating: 0
read at:
date added: 2015/11/10
shelves: currently-reading

perl hacks

Training Courses – More Details

Last week I mentioned the public training courses that I’ll be running in London next February. A couple of people got in touch and asked if I had more details of the contents of the courses. That makes sense, of course; I don’t expect people to pay £300 for a day’s training without knowing a bit about the syllabus.

So here are details of the first two courses (the Moose one and the DBIx::Class one). I hope to have details of the others available by next weekend.

Object Oriented Programming with Perl and Moose

Database Programming with Perl and DBIx::Class

If you have any further questions, please either ask them in the comments or email me (I’m dave at this domain).

And if I’ve sold you on the idea of these courses, the booking page is now open.

Send to Kindle

The post Training Courses – More Details appeared first on Perl Hacks.


Eighteen Classic Albums

A couple of months ago, I wrote a post about a process I had developed for producing ebooks. While dabbling in a few projects (none of which are anywhere near being finished) I established that the process worked and I was able to produce ebooks in various different formats.

But what I really needed was a complete book to try the process on, so that I could push it right through the pipeline until it was for sale on Amazon. I didn’t have the time to write a new book, so I looked around for some existing text that I could reuse.

Long-time readers might remember the record club that I was a member of back in 2012. It was a Facebook group where each week we would listen to a classic album and then discuss it with the rest of the group. I took it a little further and wrote up a blog post for each album. That sounded like a good set of posts to use for this project.

So I grabbed the posts, massaged them a bit, added a few other files and, hey presto, we have a book. All in all it took about two or three hours of work. And a lot of that was my amateur attempts at creating a cover image. If you’re interested in the technical stuff, then you can find all the input files on Github.

There has been some confusion over the title of the book. Originally, I thought there were seventeen reviews in the series. But that was because I had mis-tagged one. And, of course, you only find problems like that after you create the book and upload it to Amazon. So there are rare “first printing” versions available with only seventeen reviews and a different title. Currently the book page on Amazon is still showing the old cover. I hope that will be sorted out soon. It’ll be interesting to see how quickly the fixed version is pushed out to people who have already bought the older edition.

My process for creating ebooks is working well. And the next step of the process (uploading the book to Amazon) was pretty painless too. You just need to set up a Kindle Direct Publishing account and then upload a few files and fill in some details of the book. I’ve priced it at $2.99 (which is £1.99) as that’s the cheapest rate at which I can get 70% of the money. The only slight annoyance in the process is that once you’ve uploaded a book and given all the details, you can’t upload a new version or change any of the information (like fixing the obvious problems in the current description) until the current version has been published across all Amazon sites. And that takes hours. And, of course, as soon as you submit one version you notice something else that needs to be fixed. So you wait. And wait.

But I’m happy with the way it has all gone and I’ll certainly be producing more books in the future using this process.

Currently three people have bought copies. Why not join them? It only costs a couple of quid. And please leave a review.

The post Eighteen Classic Albums appeared first on Davblog.

perl hacks

Public Training in London – February 2016

For several years I’ve been running an annual set of public training courses in London in conjunction with FLOSS UK (formerly known as UKUUG). For various scheduling reasons, we didn’t get round to running any this year, but we have already made plans for next year.

I’ll be running five days of training in central London from 8th – 12th February. The courses will take place at the Ambassador’s Hotel on Upper Woburn Place. Full details are in the process of appearing on the FLOSS UK web site, but the booking page doesn’t seem to be live yet, so I can’t tell you how much it will cost.

We’re doing something a little different this year. In previous years, I’ve been running two generic two-day courses – one on intermediate Perl and one on advanced Perl. This year we’re running a number of shorter but more focussed courses. The complete list is:

This new approach came out of some feedback we’ve received from attendees over the last couple of years. I’m hoping that by offering these shorter courses, people will be able to take more of a “mix and match” approach and will select courses that better fit their requirements. Of course, if you’re interested, there’s no reason why you shouldn’t come to all five days.

I’ll update this page when I know how much the courses will cost and how you can book. But please put these dates in your calendar.

Update: And less than 24 hours after publishing this blog post, the booking page has gone live.

Places are £300 a day (so £600 for the two-day course on web programming) and there’s a special offer of £1,320 for the full week.

Prices are cheaper (by £90 a day) for members. And given that an annual individual membership costs £35, that all sounds like a bit of a no-brainer to me.

Send to Kindle

The post Public Training in London – February 2016 appeared first on Perl Hacks.

perl hacks

Build RPMs of CPAN Modules

If you’ve been reading my blog for a while, you probably already know that I have an interest in building RPMs of CPAN modules. I run a small RPM repository where I make available all of the RPMs that I have built for myself. These will either be modules that aren’t available in other RPM repositories or modules where I wanted a newer version than the currently available one.

I’m happy to take requests for my repo, but I don’t often get any. That’s probably because most people very sensibly use the cpanminus/local::lib approach or something along those lines.

But earlier this week, I was sitting on IRC and Ilmari asked if I had a particular module available. When I said that I didn’t, he asked if I had a guide for building an RPM. I didn’t (well there are slides from YAPC 2008 – but they’re a bit dated) but I could see that it was a good suggestion. So here it is. Oh, and I built the missing RPM for him.

Setting Up

In order to build RPMs you’ll need a few things set up. This is stuff you’ll only need to do once. Firstly, you’ll need two new packages installed – cpanspec (which parses a CPAN distribution and produces a spec file) and rpm-build (which takes a spec file and a distribution and turns them into an RPM). They will be available in the standard repos for your distribution (assuming your distribution is something RPM-based like Fedora or Centos) so installing them is as simple as:

sudo yum install cpanspec rpm-build

If you’re using Fedora 22 or later, “yum” has been replaced with “dnf”.

Next, you’ll need a directory structure in which to build your RPMs. I always have an “rpm” directory in my home directory, but it can be anywhere and called anything you like. Within that directory you will need subdirectories called BUILD, BUILDROOT, RPMS, SOURCES, SPECS and SRPMS. We’ll see what most of those are for a little later.
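
If you’d like a one-liner to create that layout (assuming you go with an “rpm” directory in your home directory, as I do), shell brace expansion does the job:

mkdir -p ~/rpm/{BUILD,BUILDROOT,RPMS,SOURCES,SPECS,SRPMS}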

The final thing you’ll need is a file called “.rpmmacros” in your home directory. At a minimum, it should contain this:

%packager Your Name <>
%vendor Some Organisation
%_topdir /home/you/rpm

The packager and vendor settings are just to stop you having to type in that information every time you build an RPM. The _topdir setting points to the “rpm” directory that you created a couple of paragraphs up.

I would highly recommend adding the following line as well:

%__perl_requires %{nil}

This turns off the default behaviour for adding “Requires” data to the RPM. The standard behaviour is to parse the module’s source code looking for every “use” statement. By turning that off, you instead trust the information in the META.yml to be correct. If you’re interested in hearing more detail about why I think the default behaviour is broken, then ask me in a pub sometime.

Ok. Now we’re all set. We can build our first RPM.

Building an RPM

Building an RPM is simple. You use “cpanspec” to make the spec file and then “rpmbuild” to build the RPM. You can use “cpanspec” in a few different modes. If you have the module tarball, then you can pass that to “cpanspec”.

cpanspec Some-Module-0.01.tar.gz

That will unwrap the tarball, parse the code and create the spec file.

But if you’re building an RPM for a CPAN module, you don’t need to download the tarball first, “cpanspec” will do that for you if you give it a distribution name.

cpanspec Some-Module

That will connect to CPAN, find the latest version of the distribution, download the right tarball and then do all the unwrapping, parsing and spec creation.

But there’s another, even cleverer way to use “cpanspec” and that’s the one that I use. If you only know the module’s name and you’re not sure which distribution it’s in, then you can just pass the name of the module.

cpanspec Some::Module

This is the mode that I always use it in.

No matter how you invoke “cpanspec”, you will end up with the distribution tarball and the spec file – which will be called “perl-Some-Module.spec”. You need to copy these files into the correct directories under your rpm building directory. The tarball goes into SOURCES and the spec goes into SPECS. It’s also probably easiest if you change directory into your rpm building directory.
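
For example, sticking with the hypothetical Some-Module distribution from above and an “rpm” directory in your home directory, that’s just:

cp Some-Module-0.01.tar.gz ~/rpm/SOURCES/
cp perl-Some-Module.spec ~/rpm/SPECS/
cd ~/rpm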

You can now build the RPM with this command:

rpmbuild -ba SPECS/perl-Some-Module.spec

You’ll see a lot of output as “rpmbuild” goes through the whole CPAN module testing and building process. But hopefully eventually you’ll see some output saying that the build has succeeded and that an RPM has been written under your RPMS directory (in either the “noarch” or “x86_64” subdirectory). You can install that RPM with any of the following commands:

sudo rpm -ivh <path-to-rpm>
sudo yum localinstall <path-to-rpm>
sudo dnf install <path-to-rpm>
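
Before installing, it can be worth checking exactly what ended up in the package. rpm can list the contents of the file for you (the exact path and filename will depend on the module name, version and architecture):

rpm -qlp RPMS/noarch/perl-Some-Module-0.01-1.noarch.rpm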

And that should be that. Of course there are a few things that can go wrong. And that’s what the next section is about.

Fixing Problems

There are a number of things that can go wrong when building RPMs. Here are some of the most common, along with suggested fixes.

Missing prerequisites

This is also known as “dependency hell”. The module you are building is likely to require other modules. And you will need to have those installed before “rpmbuild” will let you build the RPM (and, note, they’ll need to be installed as RPMS – the RPM database doesn’t know about modules you have installed with “cpan” or “cpanminus”).

If you have missing prerequisites, the first step is to try to install them using “yum” (or “dnf”). Sometimes you will get lucky, other times the prerequisites won’t exist in the repos that you’re using and you will have to build them yourself. This is the point at which building an RPM for a single module suddenly spirals into three hours of painstaking work as you struggle to keep track of how far down the rabbit-hole you have gone.
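
One thing that makes the yum/dnf step less painful is that Perl RPMs in the main repositories declare “perl(Module::Name)”-style provides, so you can ask for a module by name rather than guessing the package name. Some::Prerequisite below is obviously a stand-in for whatever module the build is complaining about:

sudo yum install 'perl(Some::Prerequisite)'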

I keep thinking that I should build a tool which parses the prerequisites, works out which ones already exist and automatically tries to build the ones that are missing. It would need to work recursively of course. I haven’t summoned the courage yet.

Extra files

Sometimes at the end of an RPM build, you’ll get an error saying that files were found which weren’t listed in the spec file. This usually means that the distribution contains programs that “cpanspec” didn’t find and therefore didn’t add to the spec file. This is a simple fix. Open the spec file in an editor and look for the section labelled ‘%files’. Usually, it will look something like this:

%files
%doc AUTHORS Changes LICENSE META.json
%{perl_vendorlib}/*
%{_mandir}/man3/*

This is a list of the files which will be added to the RPM. See the _mandir entry? That’s the man page for the module that is generated from the module’s Pod (section 3 is where library documentation goes). We just need to add two lines to the bottom of this section:


%{_bindir}/*
%{_mandir}/man1/*

This says “add any files you find in the binaries directories (and also any man pages you find for those programs)”.

If you add these lines and re-run the “rpmbuild” command, the build should now succeed.

Missing header files

If you’re building an XS module that is a wrapper around a C library then you will also need the C header files for that library in order to compile the XS files. If you get errors about missing definitions, then this is probably the problem. In RedHat-land a C library called “mycoolthing” will live in an RPM called “libmycoolthing” and the headers will be in an RPM called “libmycoolthing-devel”. You will need both of those installed.

Your users, however, will only need the C library (libmycoolthing) installed. It’s well worth telling the RPM system that this external library is required by adding the following line to the spec file:

Requires: libmycoolthing

That way, when people install your module using “yum” or “dnf”, it will pull in the correct C library too. “cpanspec” will automatically generate “Requires” lines for other Perl RPMs, but it can’t do it for libraries that aren’t declared in the META.yml file.
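
Similarly, if you want the build itself to stop with a clear message when those headers are missing, you can declare the build-time dependency in the spec file too (using the hypothetical package names from above):

BuildRequires: libmycoolthing-devel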


So that’s it. A basic guide to building RPMs from CPAN distributions. There’s a lot more detail that I could cover, but this should be enough to work for 80-90% of the modules that you will want to build.

If you have any questions, then please leave a comment below.

Send to Kindle

The post Build RPMs of CPAN Modules appeared first on Perl Hacks.


How To Travel From London To Paris

Imagine that you want to travel from London to Paris. Ok, so that’s probably not too hard to imagine. But also imagine that you have absolutely no idea how to do that and neither does anyone that you know. In that situation you would probably go to Amazon and look for a book on the subject.

Very quickly you find one called “Teach Yourself How To Travel From London To Paris In Twenty-One Days”. You look at the reviews and are impressed.

I had no idea how to get from London to Paris, but my family and I followed the instructions in this book. I’m writing this from the top of the Eiffel Tower – five stars.


I really thought it would be impossible to get from London to Paris, but this book really breaks it down and explains how it’s done – five stars.

There are plenty more along the same lines.

That all looks promising, so you buy the book. Seconds later, it appears on your Kindle and you start to read.

Section one is about getting from London to Dover. Chapter one starts by ensuring that all readers are starting from the same place in London and suggests a particular tavern in Southwark where you might meet other travellers with the same destination. It then sets out a walking route that you might follow from Southwark to Canterbury. It’s written in slightly old-fashioned English and details of the second half of the route are rather sketchy.

Chapter two contains a route to walk from Canterbury to Dover. The language has reverted to modern English and the information is very detailed. There are reviews of many places to stay on the way – many of which mention something called “Trip Advisor”.

Section two is about crossing the channel. Chapter three talks about the best places in Dover to find the materials you are going to need to make your boat and chapter four contains detailed instructions on how to construct a simple but seaworthy vessel. The end of the chapter has lots of advice on how to judge the best weather conditions for the crossing. Chapter five is a beginner’s guide to navigating the English Channel and chapter six has a list of things that might go wrong and how to deal with them.

Section three is about the journey from Calais to Paris. Once again there is a suggested walking route and plenty of recommendations of places to stay.

If you follow the instructions in the book you will, eventually, get to Paris. But you’re very likely to come away thinking that it was all rather more effort than you expected it to be and that next time you’ll choose a destination that is easier to get to.

You realise that you have misunderstood the title of the book. You thought it would take twenty-one days to learn how to make the journey, when actually it will take twenty-one days (at least!) to complete the journey. Surely there is a better way?

And, of course, there is. Reading further in the book’s many reviews you come across the only one-star review:

If you follow the instructions in this book you will waste far too much time. Take your passport to St. Pancras and buy a ticket for the Eurostar. You can be in Paris in less than four hours.

The reviewer claims to be the travel correspondent for BBC Radio Kent. The other reviewers were all people with no knowledge of travel who just happened to come across the book in the same way that you did. Who are you going to trust?

I exaggerate, of course, for comic effect. But reviews of technical books on Amazon are a lot like this. You can’t trust them because in most cases the reviewers are the very people who are least likely to be able to give an accurate assessment of the technical material in the book.

When you are choosing a technical book you are looking for two things: how clearly the material is explained, and whether the advice it gives is technically sound.

Most people pick up a technical book because they want to learn about the subject that it covers. That means that, by definition, they are unable to judge that second point. They know how easily they understood the material in the book. They also know whether or not they managed to use that information to achieve their goals. But, as my overstretched metaphor above hopefully shows, it’s quite possible to follow terrible advice and still achieve your goals.

I first became aware of this phenomenon in the late 1990s. At the time a large number of dynamic web pages were built using Perl and CGI. This meant that a lot of publishers saw this as a very lucrative market and dozens of books on the subject were published, many of which covered the Perl equivalent of walking from London to Paris. And because people read these books and managed to get to Paris (albeit in a ridiculously roundabout manner) they thought the books were great and gave them five-star reviews. Much to the chagrin of Perl experts who were standing on the kerbside on the A2 shouting “but there’s a far easier way to do that!”

This is still a problem today. Earlier this year I reviewed a book about penetration testing using Perl. I have to assume that the author knew what he was doing when talking about pen testing, but his Perl code was positively Chaucerian.

It’s not just book reviews that are affected. Any kind of technical knowledge transfer mechanism is open to the same problems. A couple of months ago I wrote a Perl tutorial for Udemy. It only covered the very basics, so they included a link to one of their other Perl courses. But having sat through the first few lessons of this course, I know that it’s really not very good. How did the people at Udemy choose which one to link to? Well it’s the one with the highest student satisfaction ratings, of course. It teaches the Perl equivalent of boat-building. A friend has a much better Perl course on Udemy, but they wouldn’t use that as it didn’t have enough positive feedback.

Can we blame anyone for this? Well, we certainly can’t blame the reviewers. They don’t know that they are giving good reviews to bad material. I’m not even sure that we can blame the authors in many cases. It’s very likely that they don’t know how much they don’t know (obligatory link to the Dunning–Kruger effect). I think that in some cases the authors must know that they are chancing their arm by putting themselves forward as an expert, but most of them probably believe that they are giving good advice (because they learned from an expert who taught them how to walk from London to Paris and so the chain goes back to the dawn of time).

I think a lot of the blame must be placed with the publishers. They need to take more responsibility for the material they publish. If you’re publishing in a technical arena then you need to build up contacts in that technical community so that you have people you can trust who can give opinions on your books. If you’re publishing a book on travelling from London to Paris then see if you can find a travel correspondent to verify the information in it before you publish it and embarrass yourselves. In fact, get these experts involved in the commissioning process. If you want to publish a travel book then ask your travel correspondent friends if they know anyone who could write it. If someone approaches you with a proposal for a travel book then run the idea past a travel correspondent or two before signing the contract.

I know that identifying genuine experts in a field can be hard. And I know that genuine experts would probably like to be compensated for any time they spend helping you, but I think it’s time and money well-spent. You will end up with better books.

Or, perhaps some publishers don’t care about the quality of their books. If bad books can be published quickly and cheaply and people still buy them, then what business sense does it make to make them better?

If you take any advice away from this piece, then don’t trust reviews and ratings of technical material.

And never try to walk from London to Paris (unless it’s for charity).

The post How To Travel From London To Paris appeared first on Davblog.

perl hacks

The Joy of Prefetch

If you heard me speak at YAPC or you’ve had any kind of conversation with me over the last few weeks then it’s likely you’ve heard me mention the secret project that I’ve been writing for my wife’s school.

To give you a bit of background, there’s one afternoon a week where the students at the school don’t follow the normal academic timetable. On that afternoon, the teachers all offer classes on wider topics. This year’s topics include Acting, Money Management and Quilt-Making. It’s a wide-ranging selection. Each student chooses one class per term.

This year I offered to write a web app that allowed the students to make their selections. This seemed better than the spreadsheet-based mechanisms that have been used in the past. Each student registers with their school-based email address and then on a given date, they can log in and make their selections.

I wrote the app in Dancer2 (my web framework of choice) and the site started allowing students to make their selections last Thursday morning. In the run-up to the go-live time, Google Analytics showed me that about 180 students were on the site waiting to make their selections. At 7am the selections part of the site went live.

And immediately stopped working. Much to my embarrassment.

It turned out that a disk failed on the server moments after the site went live. It’s the kind of thing that you can’t predict. But it leads to lots of frustrated teenagers and doesn’t give a very good impression.

To give me time to rebuild and stress-test the site we’ve decided to relaunch at 8pm this evening. I’ve spent the weekend rebuilding the app on a new (and more powerful) server.

I’m pretty sure that the timing of the failure was coincidental. I don’t think that my app caused the disk failure. But a failure of this magnitude makes you paranoid, so I spent a lot of yesterday tuning the code.

The area I looked at most closely was the number of database queries that the app was making. There are two main actions that might be slow – the page that builds the list of courses that a student can choose from and the page which saves a student’s selections.

I started with the first of these. I set DBIC_TRACE to 1 and fired up a development copy of the app. I was shocked to see the app run about 120 queries – many of which were identical.
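
If you haven’t come across it, DBIC_TRACE is just an environment variable; when it’s set, DBIx::Class prints every SQL statement it runs (to STDERR by default). For a Dancer2 app you might start your development server with something like this (bin/app.psgi being the usual Dancer2 scaffolding – adjust for your own layout):

DBIC_TRACE=1 plackup bin/app.psgi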

Of course I should have tested this before. And, yes, it’s an idiotic way to build an application. But I’m afraid that using an ORM like DBIx::Class can make it all too easy to write code like this. Fortunately, it makes it easy to fix it too. The secret is “prefetch”.

“Prefetch” is an option you can pass to the “search” method on a resultset. Here’s an example of the difference it can make.

There are seven year groups in a British secondary school. Most schools call them Year 7 to Year 13 (the earlier years are in primary school). Each year group will have a number of forms. So there’s a one to many relationship between years and forms. In database terms, the form table holds a foreign key to the year table. In DBIC terms, the Year result class has a “has_many” relationship with the Form result class and the Form result class has a “belongs_to” relation with the Year result class.

A naive way to list the years and their associated forms would look like this:

foreach my $year ($schema->resultset('Year')->all) {
  say $year->name;
  foreach my $form ($year->forms->all) {
    say '* ', $form->name;
  }
}

Run code like that with DBIC_TRACE turned on and you’ll see the proliferation of database queries. There’s one query that selects all of the years and then for each year, you get another query to get all of its associated forms.

Of course, if you were writing raw SQL, you wouldn’t do that. You’d write one query that joins the year and form tables and pulls all of the data back at once. And the “prefetch” option gives you a way to do that in DBIC as well.

foreach my $year ($schema->resultset('Year')->search({}, {
  prefetch => 'forms',
})->all) {
  say $year->name;
  foreach my $form ($year->forms->all) {
    say '* ', $form->name;
  }
}

All we have done here is to interpose a call to “search” which adds the “prefetch” option. If you run this code with DBIC_TRACE turned on, then you’ll see that there’s only one database query and it’ll be very similar to the raw SQL that you would have written – it brings back the data from both of the tables at the same time.

But that’s not all of the cleverness of the “prefetch” option. You might be wondering what the difference is between “prefetch” and the rather similar-sounding “join” option. Well, with “join” the columns from the joined table would be added to your main table’s result set. This would, for example, create some kind of mutant Year resultset object that you could ask for Form data using calls like “get_column(‘’)”. [Update: I was trying to simplify this explanation and I ended up over-simplifying to the point of complete inaccuracy – joined columns only get added to your result set if you use the “columns” or “select/as” attributes. And the argument to “get_column()” needs to be the column name that you have defined using those options.] And that’s useful sometimes, but often I find it easier to use “prefetch” as that uses the data from the form table to build Form result objects which look exactly as they would if you pulled them directly from the database.

So that’s the kind of change that I made in my code. By prefetching a lot of associated tables I was able to drastically cut down the number of queries made to build that course selection page. Originally, it was about 120 queries. I got it down to three. Of course, each of those queries is a lot larger and is doing far more work. But there’s a lot less time spent compiling SQL and pulling data from the database.
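
It’s worth knowing that “prefetch” isn’t limited to one relationship, either. You can pass an array reference to prefetch several relationships at once, or a hash reference to follow a chain of them. The relationship names here (“forms” and “students”) are just illustrative – use whatever your own result classes define:

my @years = $schema->resultset('Year')->search({}, {
  prefetch => { forms => 'students' },  # Year -> Form -> Student, fetched in a single query
})->all;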

The other page I looked at – the one that saves a student’s selections – wasn’t quite so impressive. Originally it was about twenty queries and I got it down to six.

Reducing the number of database queries is a really useful way to make your applications more efficient and DBIC’s “prefetch” option is a great tool for enabling that. I recommend that you take a close look at it.

After crowing about my success on Twitter I got a reply from a colleague pointing me at Test::DBIC::ExpectedQueries which looks like a great tool for monitoring the number of queries in your app.

Send to Kindle

The post The Joy of Prefetch appeared first on Perl Hacks.

perl hacks

YAPC Europe 2015: A Community is a Home

I’m in Granada, Spain for the 2015 “Yet Another Perl Conference” (YAPC). The three-day conference finished about an hour and a half ago and, rather than going to a bar with dozens of other attendees, I thought I would try to get my impressions down while it’s all still fresh in my mind.

YAPC is a grass-roots conference. It’s specifically planned so that it will be relatively cheap for attendees. This year I think the cost for an attendee was 100 EUR (I’m not sure as I was a speaker and therefore didn’t need to buy a ticket). That’s impressively low cost for such an impressive conference. Each year since 2000 (when the first European YAPC took place in London) 250 to 300 Perl programmers gather for their annual conference in a different European city.

Day 0

Although the conference started on Wednesday, there were a few tutorials over the two days before that. On Tuesday I ran a one-day course on DBIx::Class, Perl’s de facto standard ORM. There were slightly fewer students than I would have liked, but they were an enthusiastic and engaged group.

The night before the conference was the traditional pre-conference meet-up. People generally arrive during the day before the conference starts and the local organisers designate a bar for us all to meet in. This year, Eligo (a recruitment company with a strong interest in placing Perl programmers) had arranged to buy pizza and beer for all of the attendees at the conference venue and we spent a pleasant evening catching up with old friends.

I should point out that I’m only going to talk about talks that I saw. There were four tracks at the conference which meant that most of the time I was having to make difficult choices about which talk to see. Other people blogging about the conference will, no doubt, have a different set of talks to discuss.

Day 1

The conference had a keynote at the start and end of each day. They all sounded interesting, but I was particularly interested in hearing Tara Andrews who opened the first day. Tara works in digital humanities. In particular, she uses Perl programs which track differences between copies of obscure medieval manuscripts. It’s a million miles from what you usually expect Perl programmers to be doing and nicely illustrates the breadth of Perl’s usage.

I saw many other interesting talks during the day. The one that stood out for me was Jose Luis Martinez talking about Paws. Paws wants to be the “official” Perl SDK for all of Amazon’s web services. If you know how many different services AWS provides, then you’ll realise that this is an impressive goal – but it sounds like they’re very nearly there.

Lunch was run on an interesting model. Granada is apparently the only remaining place in Spain where you still get served tapas whenever you order a drink in a bar. So when you registered for the conference, you were given some tokens that could be exchanged for a drink and tapas at ten local bars. It was a great way to experience a Granada tradition and it neatly avoided the huge queues that you often get with more traditional conference catering.

At the end of the day, everyone was back in the largest room for the lightning talks. These talks are only five minutes long – which makes them a good way for new speakers to try public speaking without having to commit to a longer talk. They are also often used by more experienced speakers to let their hair down a bit and do something not entirely serious. This session was the usual mixture of talks, which included me giving a talk gently ribbing people who don’t keep their Perl programming knowledge up to date.

The final session of the day was another keynote. Curtis Poe talked about turning points in the story of Perl and the Perl community. Two points that he made really struck home to me (both coming out of the venerable age of Perl) – firstly that Perl is a language that is “Battle-Tested” and that isn’t going anywhere soon; and secondly that the Perl community has really matured over the last few years and is now a big part of Perl’s attraction. This last point was apparently reiterated in a recent Gartner report on the relative merits of various programming languages.

Wednesday evening saw an excuse for more socialising with the official conference dinner. This was a buffet affair around the swimming pool of a swanky Granada hotel. Conference attendees paid nothing for this event and the food and drink was still flowing freely when I slunk off back to my hotel room.

Day 2

Thursday morning started with another Perl community tradition – the “State of the Velocirapter” talk. This is an annual talk that focusses on the Perl 5 community and its achievements (in comparison with Larry Wall’s “State of the Onion” talk which generally concentrates on the Perl 6 project). This year, Matt Trout has handed over responsibility for this talk to Sawyer, who was in a more reflective mood than Matt has often been. Like Curtis, the previous evening, Sawyer has noticed how the Perl community has matured and has reached the conclusion that many of us love coming to YAPC because the community feels like our home.

Next up was Jessica Rose talking about The Cult of Expertise. This was less a talk and more a guided discussion about how people become recognised as experts and whether that designation is useful or harmful in the tech industry. It was a wide-ranging discussion, covering things like imposter syndrome and the Dunning-Kruger effect. It was rather a departure for such a technical conference and I think it was a very successful experiment.

The next talk was very interesting too. As I said above, the European YAPC has 250 to 300 attendees each year. But in Japan, they run a similar conference which, this year, had over 2,000 attendees. Daisuke Maki talked about how he organised a conference of that size. A lot of what he said could be very useful for future conference organisers.

After lunch was the one session where I had no choice. I gave my talk on “Conference Driven Publishing” during the second slot. It wasn’t at all technical but I think I got some people interested in my ideas of people writing their own Perl books and publishing them as ebooks.

At the end of the day, we had another excellent session of lightning talks and another keynote – this time from Xavier Noria, a former member of the Perl community who switched to writing Ruby several years ago. He therefore had an interesting perspective on the Perl community and was happy to tell us about some of Perl’s features that fundamentally shaped how he thought about software.

There was still one more session that took us well into the evening. There is a worry that we aren’t getting many new young programmers into the Perl community, so Andrew Solomon of GeekUni organised a panel discussion on how to grow the community. A lot of ideas were shared, but I’m not sure that any concrete plans came out of it.

Day 3

And so to the final day. The conference started early with a keynote by Stevan Little. The theme of the conference was “Art and Engineering” and Stevan studied art at college rather than computer science, so he talked about art history and artistic techniques and drew some interesting comparisons with the work of software development. In the end he concluded that code wasn’t art. I’m not sure that I agree.

I then saw talks on many different topics – an example of a simple automation program written in Perl 6, a beginner’s guide to who’s who and what’s what in the Perl community, an introduction to running Perl on Android, a couple of talks on different aspects of running Perl training courses, one on the Perl recruitment market and one on a simple git-driven tool for checking that you haven’t made a library far slower when you add features. All in all, a pretty standard selection of topics for a day at YAPC.

The final keynote was from Larry Wall, the man who created Perl in 1987 and who has been steering the Perl 6 project for the last fifteen years. This was likely to include some big news. At FOSDEM in February, Larry announced his intention to release a beta test version of Perl 6 on his birthday (27 September) and version 1.0 (well, 6.0, I suppose) by Christmas. There were some caveats as there were three major pieces of work that were still needed.

Larry’s talk compared Perl 5 and Perl 6 with The Hobbit and The Lord of the Rings respectively – apparently Tolkien also spent 15 years working on The Lord of the Rings – but finished by announcing that the work on the three blockers was all pretty much finished so it sounds like we really can expect Perl 6 by Christmas. That will be a cause for much celebration in the Perl community.

After Larry, there was a final session of lightning talks (including a really funny one that was a reaction to my lightning talk on the first day) and then it only remained to give all of the organisers and helpers a standing ovation to thank them for another fabulous YAPC.

Next year’s conference will be in Cluj-Napoca. I’m already looking forward to it. Why not join us there?

Send to Kindle

The post YAPC Europe 2015: A Community is a Home appeared first on Perl Hacks.


Conference Driven Publishing

A talk I gave at the London Perl Mongers Technical Meeting on 13th August 2015

Writing Books (The Easy Bit)

Last night I spoke at a London Perl Mongers meeting. As part of the talk I spoke about a toolchain that I have been using for creating ebooks. In this article I’ll go into a little more detail about the process.

Basically, we’re talking about a process that takes one or more files in some input format and (as easily as possible) turns them into one or more output formats which can be described as “ebooks”. So before we can decide which tools we need, we should decide what those various file formats should be.

For my input format I chose Markdown. This is a text-based format that has become popular amongst geeks over the last few years. Geeks tend to like text-based formats more than the proprietary binary formats like those produced by word processors. This is for a number of reasons. You can read them without any specialised tools. You’re not tied down to using specific tools to create them. And it’s generally easier to store them in a revision management system like Github.

For my output formats, I wanted EPUB and Mobipocket. EPUB is the generally accepted standard for ebooks and Mobipocket is the ebook format that Amazon use. And I also wanted to produce PDFs, just because they are easy to read on just about any platform.

(As an aside, you’ll notice that I said nothing in that previous paragraph about DRM. That’s simply because nice people don’t do that.)

Ok, so we know what file formats we’ll be working with. Now we need to know a) how we create the input format and b) how we convert between the various formats. Creating the Markdown files is easy enough. It’s just a text file, so any text editor would do the job (it would be interesting to find out if any word processor can be made to save text as Markdown).

To convert our Markdown into EPUB, we’ll need a new tool. Pandoc describes itself as “a universal document converter”. It’s not quite universal (otherwise that would be the only tool that we would need), but it is certainly great for this job. Once you have installed Pandoc, the conversion is simple:

pandoc -o your_book.epub title.txt your_book.md --epub-metadata=metadata.xml --toc --toc-depth=2

There are two extra files you need here (I’m not sure why it can’t all be in the same file, but that’s just the way it seems to be). The first (which I’ve called “title.txt”), contains two lines. The first line has the title of your book and the second has the author’s name. Each line needs to start with a “%” character. So it might look like this:

% Your title
% Your name

The second file (which I’ve called “metadata.xml”) contains various pieces of information about the book. It’s (ew!) XML and looks like this:

<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
<dc:title id="main">Your Title</dc:title>
<meta refines="#main" property="title-type">main</meta>
<dc:creator opf:file-as="Surname, Forename" opf:role="aut">Forename Surname</dc:creator>
<dc:publisher>Your name</dc:publisher>
<dc:date opf:event="publication">2015-08-14</dc:date>
<dc:rights>Copyright ©2015 by Your Name</dc:rights>
</metadata>

So after creating those files and running that command, you’ll have an EPUB file. Next we want to convert that to a Mobipocket file so that we can distribute our book through Amazon. Unsurprisingly, the easiest way to do that is to use a piece of software that you get from Amazon. It’s called Kindlegen and you can download it from their site. Once it is installed, the conversion is as simple as:

kindlegen perlwebbook.epub

This will leave you with a file called “perlwebbook.mobi” which you can upload to Amazon.

There’s one last conversion that you might need. And that’s converting the EPUB to PDF. Pandoc will make that conversion for you. But it does it using a piece of software called LaTeX which I’ve never had much luck with. So I looked for an alternative solution and found it in Calibre. Calibre is mainly an ebook management tool, but it also converts between many ebook formats. It’s pretty famous for having a really complex user interface but, luckily for us, there’s a command line program called “ebook-convert” – which we can use.

ebook-convert perlwebbook.epub perlwebbook.pdf

And that’s it. We start with a Markdown file and end up with an ebook in three formats. Easy.

Of course, that really is the easy part. There’s a bit that comes before (actually writing the book) and a bit that comes after (marketing the book) and they are both far harder. Last year I read a book called Author, Publisher, Entrepreneur which covered these three steps to a very useful level of detail. Their step two is rather different to mine (they use Microsoft Word if I recall correctly) but what they had to say about the other steps was very interesting. You might find it interesting if you’re thinking of writing (and self-publishing) a book.

I love the way that ebooks have democratised the publishing industry. Anyone can write and publish a book and make it available to everyone through the world’s largest book distribution web site.

So what are you waiting for? Get writing. If you find my toolchain interesting (or if you have any comments on it) then please let me know.

And let me know what you’ve written.

The post Writing Books (The Easy Bit) appeared first on Davblog.


Financial Account Aggregation

Three years ago, I wrote a blog post entitled Internet Security Rule One about the stupidity of sharing your passwords with anyone. I finished that post with a joke.

Look, I’ll tell you what. I’ve got a really good idea for an add-on for your online banking service. Just leave the login details in a comment below and I’ll set it up for you.

It was a joke because it was obviously ridiculous. No-one would possibly think it was a good idea to share their banking password with anyone else.

I should know not to make assumptions like that.

Yesterday I was made aware of a service called Money Dashboard. Money Dashboard aggregates all of your financial accounts so that you can see them all in one convenient place. They can then generate all sorts of interesting reports about where your money is going and can probably make intelligent suggestions about things you can do to improve your financial situation. It sounds like a great product. I’d love to have access to a system like that.

There’s one major flaw though.

In order to collect the information they need from all of your financial accounts, they need your login details for the various sites that you use. And that’s a violation of the Internet Security Rule One. You should never give your passwords to anyone else – particularly not passwords that are as important as your banking password.

I would have thought that was obvious. But they have 100,000 happy users.

Of course they have a page on their site telling you exactly how securely they store your details. They use “industry-standard security practices”, their application is read-only “which means it cannot be used for withdrawals, payments or to transfer your funds”. They have “selected partners with outstanding reputations and extensive experience in security solutions”. It all sounds lovely. But it really doesn’t mean very much.

It doesn’t mean very much because at the heart of their system, they need to log on to your bank’s web site pretending to be you in order to get hold of your account information. And that means that no matter how securely they store your passwords, at some point they need to be able to retrieve them in plain text so they can use them to log on to your banks web site. So there must be code somewhere in their system which punches through all of that security and gets the string “pa$$word”. So in the worst case scenario, if someone compromises their servers they will be able to get access to your passwords.

If that doesn’t convince you, then here’s a simpler reason for not using the service. Sharing your passwords with anyone else is almost certainly a violation of your bank’s terms and conditions. So if someone does get your details from Money Dashboard’s system and uses that information to wreak havoc in your bank account – good luck getting any compensation.

Here, for example, are First Direct’s T&Cs about this (in section 9.1):

You must take all reasonable precautions to keep safe and prevent fraudulent use of any cards, security devices, security details (including PINs, security numbers, passwords or other details including those which allow you to use Internet Banking and Telephone Banking).

These precautions include but are not limited to all of the following, as applicable:


  • not allowing anyone else to have or use your card or PIN or any of our security devices, security details or password(s) (including for Internet Banking and Telephone Banking) and not disclosing them to anyone, including the police, an account aggregation service that is not operated by us

Incidentally, that “not operated by us” is a nice piece of hubris. First Direct run their own account aggregation service which, of course, they trust implicitly. But they can’t possibly trust anybody else’s service.

I started talking about this on Twitter yesterday and I got this response from the @moneydashboard account. It largely ignores the security aspects and concentrates on why you shouldn’t worry about breaking your bank’s T&Cs. They seem to be campaigning to get T&Cs changed to allow explicit exclusions for sharing passwords with account aggregation services.

I think this is entirely wrong-headed. I think there is a better campaign that they should be running.

As I said above, I think that the idea of an account aggregation service is great. I would love to use something like Money Dashboard. But I’m completely unconvinced by their talk of security. They need access to your passwords in plain text. And it doesn’t matter that their application only reads your data. If someone can extract your login details from Money Dashboard’s systems then they can do whatever they want with your money.

So what’s the solution? Well I agree with one thing that Money Dashboard say in their statement:

All that you are sharing with Money Dashboard is data; data which belongs to you. You are the customer, you should be telling the bank what to do, not the other way around!

We should be able to tell our banks to share our data with third parties. But we should be able to do it in a manner that doesn’t entail giving anyone full access to our accounts. The problem is that there is only one level of access to your bank account. If you have the login details then you can do whatever you want. But what if there was a secondary set of access details – ones that could only read from the account?

If you’ve used the web much in recent years, you will have become familiar with this idea. For example, you might have wanted to give a web app access to your Twitter account. During this process you will be shown a screen (which, crucially, is hosted on Twitter’s web site, not the new app) asking if you want to grant rights to this new app. And telling you which rights you are granting (“This app wants to read your tweets.” “This app wants to tweet on your behalf.”) You can decide whether or not to grant that access.

This is called OAuth. And it’s a well-understood protocol. We need something like this for the finance industry. So that I can say to First Direct, “please allow this app to read my account details, but don’t let them change anything”. If we had something like that, then all of these problems will be solved. The Money Dashboard statement points to the Financial Data and Technology Association – perhaps they are the people to push for this change.

I know why Money Dashboard are doing what they are doing. And I know they aren’t the only ones doing it (Mint, for example, is a very popular service in the US). And I really, really want what they are offering. But just because a service is a really good idea, shouldn’t mean that you take technical short-cuts to implement it.

I think that the “Financial OAuth” I mentioned above will come about. But the finance industry is really slow to embrace change. Perhaps the Financial Data and Technology Association will drive it. Perhaps one forward-thinking bank will implement it and other banks’ customers will start to demand it.

Another possibility is that someone somewhere will lose a lot of money through sharing their details with a system like this and governments will immediately close them all down until a safer mechanism is in place.

I firmly believe that systems like Money Dashboard are an important part of the future. I just hope that they are implemented more safely than the current generation.


The post Financial Account Aggregation appeared first on Davblog.

A Talk from OpenTech 2015 about a tool I wrote for monitoring parliamentary candidates on Twitter during the 2015 UK general election.
books read

Perl by Example

Perl by Example
author: Ellie Quigley
name: David
average rating: 0.0
book published: 1994
rating: 0
read at:
date added: 2015/03/01
shelves: currently-reading




Perl in the Internet of Things

My training course from the 2014 London Perl Workshop

Return to the Kingdom of the Blind

A talk from the London Perl Workshop 2014

Github, Travis-CI and Perl

A quick introduction to using Github and Travis-CI to test Perl projects
books read

The Complete Works of H.P. Lovecraft

The Complete Works of H.P. Lovecraft
author: H.P. Lovecraft
name: David
average rating: 4.30
book published: 1978
rating: 0
read at:
date added: 2014/06/12
shelves: currently-reading





Powered by Perlanet