
Oh, and to answer my own earlier question - Civil War has both mid- and post-credit scenes.


Important (non-spoiler) Civil War question: Why do they pronounce "Lagos" as "Lar-goss" instead of "Lay-goss"?


@sbisson I didn't see that. It's listed here -


No spoilers please, but are there any mid/post-credits scenes in Civil War?


Big thunder in South Quay (either that or an explosion)

books read

Three Brothers
author: Peter Ackroyd
name: David
average rating: 3.10
book published: 2013
rating: 0
read at:
date added: 2016/04/25
shelves: currently-reading

perl hacks

Training in Cluj

I’m going to be running a day of training before YAPC Europe in Cluj. It’ll be on Tuesday 23rd August. But that’s all I know about the course so far, because I want your help to plan it.

Training has been a part of the YAPC experience for a long time. And I’ve often run courses alongside YAPC Europe. I took a look back through my talk archives and this is what I found.

The first two (the half-day courses) were both given as part of the main conference. The others were all separate courses run before the conference. For those, you needed to pay extra – but it was a small amount compared with normal Perl training rates.

So now it’s 2016 and I want to run a training course in Cluj. But what should it be about? That’s where you come in. I want you to tell me what you want training on.

I’m happy to update any of the courses listed above. Or, perhaps I could cover something new this year. I have courses that I have never given at YAPC – on Moose, testing, web development and other things. Or I’d be happy to come up with something completely new that you want to hear about.

Please comment on this post, telling me your opinions. I’ll let the discussion run for a couple of weeks, then I’ll collate the most popular-looking choices and run a poll to choose which course I’m going to run.

Don’t forget – training in Cluj on 23rd August. If you’re booking travel and accommodation for the conference then please take that into account.

Oh, and hopefully it won’t just be me. If you’re a trainer and you’re going to be in Cluj for the conference, then please get in touch and we’ll add you to the list. The more courses we can offer, the better.

So here’s your chance to control the YAPC training schedule. What courses would you like to see?

The post Training in Cluj appeared first on Perl Hacks.


Code Archaeology

Long-time readers will have seen some older posts where I criticised Perl code that I’ve found in various places on the web. I thought it was about time that I admitted to some of the dodgier corners of my programming career.

You may know that one of my hobbies is genealogy. You might also know that there’s a CPAN module for dealing with GEDCOM files and a mailing list for the discussion of the intersection of Perl and genealogy. The list is usually very quiet, but it woke up briefly a few days ago when Ron Savage asked for help reconstructing some old genealogy software of his that had gone missing from his web site. Once he recovered the missing files, I noticed that in the comments he credited a forgotten program of mine for giving him some ideas. This comment included a link to my web site which (embarrassingly) was now a 404. I don’t like to leave broken links on the web, so I swiftly put a holding page in place on my site and went off to find the missing directory.

It turns out that the directory had been used to distribute a number of my early ventures into open source software. The Wayback Machine had many of them but not everything. And then I remembered that I had full back-ups of some earlier versions of my web site squirrelled away somewhere and it only took an hour or so to track them down. So that I don’t mislay them again, I’ve put them all on Github – in an appropriately named repository.

I think that most of this code dates from around 2000-2003. There’s evidence that a lot of it was stored in CVS or Subversion at some time. But the original repositories are long gone.

So, what do we have there? And just how bad is it?

There’s a really old formmail program. And it immediately becomes apparent that when I wrote it, not only did I not know as much Perl as I thought, but I was pretty sketchy on the basics of internet security as well. I can’t remember if I ever put it live but I really hope not.

Then there’s the “ms” suite of programs. My freelancing company is called Magnum Solutions and it amused me when I realised that people could potentially assume that this code came from Microsoft. I don’t think anyone ever did. Here, you’ll find the beginnings of what later became the nms project – but the nms versions are far more secure.

There’s the original slavorg bot from the IRC channel. The channel still has a similar bot, but the code has (thankfully) been improved a lot since this version.

Then there’s something just called spam. I think I was trying to get some stats on how much spam I was getting.

There are a couple of programs that date from my days wrangling Sybase in the City of London. There’s a replacement for Sybase’s own “isql” command line program. My version is called sqpl. I can’t remember what I didn’t like about isql, or how successful my replacement was. What’s interesting about this program is that there are two versions. One uses DBI to connect to the database, but the other uses Sybase’s own proprietary “CTlib” connection library. Proof, I guess, that I was talking to databases back when DBI was too new and shiny to be trusted in production.

The other Sybase-related program is called sybserv. As I recall, Sybase uses a configuration file to define the connection details of the various servers that any given client can connect to. But the format of that file was rather opaque (I seem to remember the IP address being stored as a packed integer in some cases). This program parses this file and presents the data in a far more readable format. I remember using it a lot. I believe it’s the only Perl program I’ve ever written that uses formats.

Then there’s toc. That reads an HTML document, looking for any headers. It then builds a table of contents based on those headers and inserts it into the document. I think it’ll still work.
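The idea behind toc can be sketched in a few lines (this is an illustrative reconstruction, not the original code, and the original also inserted the result back into the document):

```perl
use strict;
use warnings;

# Scan an HTML document for <h1>-<h6> headings and build an indented
# table of contents from them.
my $html = <<'HTML';
<h1>Introduction</h1>
<p>Some text.</p>
<h2>Details</h2>
HTML

my @toc;
while ( $html =~ m{<h([1-6])[^>]*>(.*?)</h\1>}gis ) {
    push @toc, { level => $1, text => $2 };
}

# Indent each entry according to its heading level.
printf "%s%s\n", '  ' x ( $_->{level} - 1 ), $_->{text} for @toc;
```

A regex like this is fine for the well-behaved HTML of 2003; a modern version would reach for a proper parser such as HTML::TreeBuilder.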

The final program is webged. This is the one that Ron got inspiration from. It parses a GEDCOM file and turns it into a web site. It works in two modes, you can either pre-generate a whole site (that’s the sane way to use it) or you can use it as a CGI program where it produces each page on the fly as it is requested. I remember that parsing the GEDCOM file was unusably slow, so I implemented an incredibly naive caching mechanism where I stored a Data::Dumper version of the GEDCOM object and just “eval”ed that. I was incredibly proud of myself at the time.
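That naive caching trick looks something like this (a reconstruction with a stand-in data structure, not the original webged code):

```perl
use strict;
use warnings;
use Data::Dumper;
use File::Temp qw(tempfile);

# Stand-in for the slow-to-build parsed GEDCOM object.
my $parsed = { name => 'John /Smith/', birth => 1870 };

# First run: dump the structure to a cache file.
my ( $fh, $cache_file ) = tempfile();
print {$fh} Dumper($parsed);    # emits "$VAR1 = { ... };"
close $fh;

# Later runs: eval the dump back in instead of re-parsing the file.
my $cached = do {
    open my $in, '<', $cache_file or die $!;
    local $/;                   # slurp mode
    my $VAR1;
    eval <$in>;                 # Dumper's output assigns to $VAR1
    $VAR1;
};
print $cached->{name}, "\n";
```

Today you’d use Storable or JSON for this (eval-ing a cache file is a code-injection risk), but it shows why the trick felt so clever at the time.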

The code in most of these programs is terrible. Or, at least, it’s very much a product of its time. I can forgive the lack of “use warnings” (Perl 5.6 wasn’t widely used back when this code was written) as they all have “-w” instead. But it’s the use of ampersands on most of the subroutine calls that makes me cringe the most.

But please have fun looking at the code and pointing out all of the idiocies. Just don’t put any of the CGI programs on a server that is anywhere near the internet.

And feel free to share any of your early code.

The post Code Archaeology appeared first on Perl Hacks.


Ten Years?

It’s been some considerable time since I wrote anything about Nadine Dorries. I still keep an eye on what she’s up to, but most of the time it’s just the same old nonsense and it’s not worth writing about.

But I was interested to read her recent blog post explaining why she had given up Twitter (again). Of course, she uses it to rehash many of her old claims of stalking and the like, but what I found really interesting was when she said:

After almost ten years on Twitter (so long I can’t remember) and with 28,000 followers, I have made my own modest exit.

Because that “almost ten years” didn’t fit my recollections. Twitter has just had its tenth anniversary. As I wrote recently, almost no-one has been on Twitter for ten years – certainly not any British MPs.

It’s simple enough to use one of the many “how long have I been on Twitter?” sites to work out when her current @NadineDorriesMP account joined Twitter. It seems to be January 2012.

But that’s not the full story. She has joined and left Twitter a few times. Let’s see what we can find out.

Firstly, here’s a blog post from May 2009 where she doesn’t seem to be planning to join Twitter any time soon.

Anyway, safe to say, I shan’t be joining the legions of twitters any day soon.

It’s several months later, in September 2009, when she announces that she has joined Twitter. So that “ten years” is more like six and a half.

I’m pretty sure that first account was also called @NadineDorriesMP. At some point over the next couple of years, she closed that account (I’ll dig through her blog later to see if I can find any evidence to date that) and some time later she returned with a new account called @Nadine_MP. I know that because in May 2011 she gave up that second account and forgot to remove the Twitter widget from her web site. Then someone else took over the now-abandoned username and used it to deface her site. And then, as we saw above, she rejoined in January 2012.

So I think the list of Nadine’s Twitter accounts goes like this:

That last account is still registered. She just chooses not to use it any more. If past behaviour is anything to go by, she’ll be back at some point.

Anyway, here’s another good example of why you can’t trust anything that Dorries says. Even on a simple fact like how long she has been using Twitter, she just pulls numbers out of the air. She makes stuff up to suit her and she’s been doing it for years.

The post Ten Years? appeared first on Davblog.


Twitter’s Early Adopters

You’ll be seeing that tweet a lot over the next few days. It’s the first ever public tweet that was posted to the service we now know as Twitter. And it was sent ten years ago by Jack Dorsey, one of Twitter’s founders.

Today, Twitter has over a hundred million users, who send 340 million tweets a day (those numbers are almost certainly out of date already) but I thought it would be interesting to look back and look at Twitter’s earliest users.

Every Twitter user has a user ID. That’s an integer which uniquely identifies them to the system. This is a simple incrementing counter[1]. You can use a site like MyTwitterID to get anyone’s ID given their Twitter username. It’s worth noting that you can change your username, but your ID is fixed. When I registered a new account last week, I got an ID that was eighteen digits long. But back in 2006, IDs were far shorter. Jack’s ID, for example, is 12. That’s the lowest currently active ID on the system. I assume that the earlier numbers were used for test accounts.

Using the Twitter API you can write a program that will give you details of a user from their ID. Yesterday I wrote a simple program to get the details of the first 100,000 Twitter users (the code is available on Github). The results from running the program are online. That’s a list of all of the currently active Twitter users with an ID less than 100,000.
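The shape of such a program is roughly this (a sketch, not the actual code on Github; the Net::Twitter call is shown commented out because it needs OAuth credentials):

```perl
use strict;
use warnings;

# The users/lookup endpoint accepts at most 100 IDs per request, so
# walk the first 100,000 IDs in batches of 100.
my @ids   = ( 1 .. 100_000 );
my $calls = 0;
while ( my @batch = splice @ids, 0, 100 ) {
    $calls++;
    # With a configured Net::Twitter client (assumed) this would be:
    # my $users = $nt->lookup_users( { user_id => \@batch } );
    # Deleted and suspended accounts are simply absent from the response,
    # which is how the "missing" early users show up in the results.
}
print "$calls requests needed\n";
```

A thousand requests is well within the API's rate limits if you pace them sensibly.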

The first thing you’ll notice is that there are far fewer than you might expect. The API only returns details on currently active users. So anyone who has closed their account won’t be listed. I expected that perhaps 20-25% of accounts might fall into that category, but it was much higher than that.

There are 12,435 users in the file. That means that 87,565 of the first 100,000 Twitter accounts are no longer active. That was such a surprise to me that I assumed there was a bug in my program. But I can’t find one. It really looks like almost 90% of the early Twitter users are no longer using the service.

The dates that the account were created range from Jack‘s on 21st March 2006 to Jeremy Hulette (ID 99983 – the closest we have to 100,000) exactly nine months later on 21st December 2006.  I guess you could get a good visualisation of Twitter’s early growth by plotting ID against creation date – but I’ll leave that to someone else.

My file also contains location. But it’s important to note that I’m getting the location that is currently associated with that account – not the original location (I wonder if Twitter still have that information). I know a large number of people who were in London when they joined Twitter but who are now in San Francisco, so any conclusions you draw from the location field are necessarily sketchy. But bearing that in mind, here are some “firsts”.

That last one seems a little high to me. I might have missed someone earlier who didn’t put “UK” in their location.

So who’s on the list? Is there anyone famous? Not that I’ve seen yet. Oh, there are well-known geeks on the list. But no-one you’d describe as a celebrity. No musicians, no actors, no politicians, no footballers or athletes. I may have missed someone – please let me know if you spot anyone.

Oh, and I’m on the list. I’m at number 14753. I signed up (as @davorg) at 11:30 on Wednesday 22nd November 2006. I suspect I’m one of the first thousand or so Brits on the list – but it’s hard to be sure of that.

Anyway, happy birthday to Twitter. I hope that someone finds this data interesting. Let me know what you find.

[1] Actually, there’s a good chance that this is no longer the case – but it was certainly true back in 2006.

The post Twitter’s Early Adopters appeared first on Davblog.


Writing Books (The Easy Bit)

As seen at Floss UK Spring Conference 2016. How to create ebooks from Markdown.

Reviving WWW::Shorten

Last July I wrote a post threatening to cull some of my unused CPAN modules. Some people made sensible comments and I never got round to removing the modules (and I’m no longer planning to) but I marked the modules in question as “HANDOFF” and waited for the rush of volunteers.

As predicted in my previous post, I wasn’t overwhelmed by the interest, but this week I got an email from Chase Whitener offering to take over maintenance of my WWW::Shorten modules. I think the WWW::Shorten modules are a great idea, but (as I said in my previous post) the link-shortening industry moves very quickly and you would need to be far more dedicated than I am in order to keep up with it. All too often I’d start getting failures for a module and, on investigation, discover that another link-shortening service had closed down.

So I was happy to hand over my modules to Chase. And in the week or so since he’s taken them over he’s been more proactive than I’ve been in the last five years.

It’s all stuff that I’d been promising myself I’d get round to doing – for about the last five years. So I’m really grateful that the project seems to have been given the shot in the arm that I never had the time to give it. Thanks to Chase for taking it on.

Looking at CPAN while writing this post, I see that there are two pages of modules in the WWW::Shorten namespace – and only about 20% of them are modules that I wrote or inherited from SPOON. It’s great to see so many modules based on my work. However, I have no idea how many of them still work (services close down, HTML changes – breaking web scrapers). It would be great if the authors could all work together to share best practice on keeping up with this fast-moving industry. Perhaps the p5-shorten Github organisation would be a good place to do that.

Anyway, that’s seven fewer distributions that I own on CPAN. And that makes me happy. Many thanks to Chase for taking them on.

Now. Who wants Guardian::OpenPlatform::API, Net::Backpack or the AudioFile::Info modules?

The post Reviving WWW::Shorten appeared first on Perl Hacks.




Training Debrief

During the second week of February, I ran my (approximately) annual public Perl training courses in association with FlossUK. Things were organised slightly differently this year. Previously we’ve run two two-day “general purpose” courses – one on Intermediate Perl and one on Advanced Perl. This year we ran four courses, each of which were on a more specific technology. There were one-day courses on Moose, DBIx::Class and testing and a two-day course on web development.

Class numbers weren’t huge during the week. We had about six people on each of the days. That’s a large enough class to get plenty of discussion and interaction going but, to be honest, I’d be happier if we got a few more people in. The attendees were split pretty much down the middle between people working for commercial organisations and people working for universities. I’m sorry to report that there were no women booked on any of the courses this year.

As is often the case on these courses most of the attendees had been using Perl for a long time and were pretty comfortable with some quite advanced Perl features. But, for various reasons, they simply hadn’t had the time to investigate some of the newer Perl tools that would have made their lives much easier. I often get people at these courses telling me that the best thing about the course is just having a day set aside where they can try out cool new technologies that they have heard of without the worry of someone calling them away to deal with some vital production issue (this, incidentally, is also why most trainers far prefer off-site training courses).

We started on day one with Moose. Most of them had used “classic” Perl OO techniques and were well aware of how baroque that can become. They were therefore very interested in the more declarative approach that Moose gave them. By the end of the day our Dalek class was using roles, traits, type coercion and many other useful Moose features.

Day two was DBIx::Class. Everyone was using a database of some kind and all of them were using DBI to interface to their database. I really enjoy introducing people like that to DBIx::Class. Once they’ve run dbicdump and have generated classes for their own databases, most people’s eyes light up when they see how much easier their code can become. As a bonus, this class contained no-one who needed to be persuaded of the benefits of defining foreign keys in your DDL and making use of referential integrity checks.

The third day was testing. I mean, it was about testing – not that it was particularly difficult. The class was full of people who knew the benefits of testing but who were maintaining large codebases with hardly any (in some cases no) tests. In the morning we looked at how simple the Perl testing framework is, did a quick survey of some useful testing modules on CPAN and even looked at writing our own testing modules. In the afternoon we expanded that to look at mocking objects and Test::Class. I think the most popular sections were when I introduced Devel::Cover and the concept of continuous integration. I encouraged them to write even just a few tests and to hook their test suite up to a highly visible Jenkins job. If you make your lack of test coverage obvious, then other people in the team can be encouraged to help improve it out of sheer embarrassment.
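The “how simple the framework is” point is easy to demonstrate; a complete, runnable test file is just a few lines of Test::More (the add() function here is a made-up example, not anything from the course materials):

```perl
use strict;
use warnings;
use Test::More tests => 2;

# A made-up function under test.
sub add { return $_[0] + $_[1] }

is( add( 2, 2 ), 4, 'add() sums two numbers' );
is( add( -1, 1 ), 0, 'negatives cancel out' );
```

Run it with prove and you get TAP output plus a summary; point Devel::Cover at the same test suite and you get the coverage reports mentioned above.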

Thursday was the first day of the two-day course on web development. The first day concentrated on PSGI and Plack. We looked at what they are and how they make web development simpler. We also looked at ways to run non-PSGI applications in a PSGI environment in order to benefit from PSGI’s advantages. This seemed to really engage a couple of people in the class who used a practical session at the end of the day to start working to get their own legacy apps running under PSGI. I was particularly pleased when, the next morning, one of them told me that he had continued to work on the problem overnight and that he had got a huge system that used a combination of CGI and mod_perl working under PSGI. He was really happy too.
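For anyone who hasn’t seen it, the core of PSGI’s appeal is that the whole interface is just a code reference mapping an environment hashref to a three-element response; this minimal app needs no modules at all:

```perl
use strict;
use warnings;

# A complete PSGI application: a coderef returning
# [ status, headers, body-lines ].
my $app = sub {
    my $env = shift;
    return [
        200,
        [ 'Content-Type' => 'text/plain' ],
        [ "Hello from " . ( $env->{PATH_INFO} // '/' ) . "\n" ],
    ];
};

# Normally a server (plackup, Starman, ...) calls $app for each request;
# here we call it directly to show the response structure.
my $res = $app->( { PATH_INFO => '/demo' } );
print "Status: $res->[0]\n";
```

Because the contract is that small, wrapping a legacy CGI or mod_perl application in a PSGI coderef is usually a mechanical job, which is what made those practical sessions so productive.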

On the final day, we looked at web frameworks in Perl. The morning was all about Dancer2. We started by building a small web app and I showed them how simple it was to interface with a database and to add authentication to the system. Later on we added an API to the app so that it could return JSON or XML instead of web pages. Early in the afternoon, I took that a step further and demonstrated Web::Machine and WebAPI::DBIC. The rest of the afternoon was about Catalyst. We built another app (similar to the Dancer one from the morning) using the standard Catalyst tutorial as a basis. I’m not sure how well this went, to be honest. Following the simplicity of Dancer with the (relative) complexity of Catalyst wasn’t, perhaps, the best advert for Catalyst.

But, all in all, I think the week went really well. I sent a small but enthusiastic group of people back to their offices with a new interest in using Modern Perl tools in their day-to-day work. And, perhaps more usefully, I think that many of them will be getting more involved in the Perl community. A few of them said “see you at the LPW” as they left.

I’m running a half-day workshop on Modern Perl Web Development at the FlossUK Spring Conference next month. Other than that, I don’t have any public courses planned. But if you think that your company would find my training useful, then please get in touch.

The post Training Debrief appeared first on Perl Hacks.




My Family in 1939

Here in the UK, a census has been taken almost every ten years since 1841. There were a few censuses before that, but before 1841 they only counted people – they didn’t include lists of names.

These census records are released 100 years after the date of the census and this data is of great interest to genealogists. The most recent census that we have access to is from 1911 and the one from 1921 will be released at the start of 2022.

But occasionally, other records emerge that are almost as useful as a census. For example, in September 1939, on the eve of the Second World War, the British government took a national register which was used to issue identity cards to everyone.

Last November, FindMyPast made the contents of this register available to everyone. Initially I didn’t look at it as I have a FindMyPast subscription and I was annoyed that this didn’t cover the new records. I assumed that eventually the new data would be rolled into my existing subscription, so I decided to wait.

I didn’t have to wait very long. Yesterday I got access to the records. So I settled down last night to find out what I could about my ancestors in 1939. As it turned out, it didn’t take long. There were only ten of them and they were split across four households.


This is most of my father’s family. You can see his parents, James and Ivy Cross. They are living with Ivy’s parents George and Lily Clarke. George worked for Greene King all of his life (for over sixty years) and this is the last job he did for them – running an off-licence in Holland-on-Sea. James and Ivy lived in the same building until James died in 1970. I remember spending a lot of time there when I was a child. I even have vague memories of George who died when I was three or four.

My father was born three months after this register was taken – in January 1940 – so it’s interesting to note that Ivy is, at this time, six months pregnant.


Just down the road are the rest of my father’s family – James’ parents Albert and Lily Cross living with their daughter (my great-aunt) Grace. Albert’s father (another James) was the lifeboatman who I have written about before.


Looking a bit further afield, we find most of my mother’s family living in Thorpe-le-Soken. You’ll see my great-grandparents, Robert and Agnes Sowman, along with three closed records. Records are closed if the people in them were born less than 100 years ago and aren’t known to have died. The first two closed records here are my grandmother, Cecilia, and her sister Margaret. Both of these women are no longer alive, so I should be able to get FindMyPast to open these records by sending them copies of their death certificates. The third closed record will be for Constance, the third daughter in the family.


And finally, here’s the final part of my family. Maud Turpin, living alone in Maldon. Maud is Agnes Sowman’s mother. Actually, this record showed me the only piece of information that I didn’t already know. Previously, I wasn’t sure when Maud’s husband Alfred died. He was still alive in the 1911 census and this record gives me strong evidence that he died before 1939. I think I’ve found a good candidate for his death record in 1931.

So that’s a pretty good summary of what you’ll find in the 1939 register. It’s a good substitute for a census (particularly as there was no census in 1941 – as the country was too busy fighting a war) and it’s nice that it’s not covered by census privacy laws, so it has been released to the public about 25 years sooner than you might expect. But, certainly in my case, I already had a lot of knowledge about my family in this period so I didn’t learn very much that was new. If I had paid the £7 per household that FindMyPast had initially asked for, I think I would have been very disappointed.

I should point out that you don’t just get this information. Each results page gives a map (actually, a selection of maps) showing where your ancestors lived. This is a nice touch. There are also random newspaper cuttings and photos from the locality. You might find these interesting – I really didn’t.

Has anyone else used these records yet? Have you found anything interesting?

p.s. And yes, if you’re paying close attention, you’ll notice that there’s one grandparent missing from my list above. Ask me about that in the pub one day.

The post My Family in 1939 appeared first on Davblog.


Why Learn Perl?

A couple of months ago I mentioned some public training courses that I’ll be running in London next month. The courses are being organised by FlossUK and since the courses have been announced the FlossUK crew have been running a marketing campaign to ensure that as many people as possible know about the courses. As part of that campaign they’ve run some sponsored tweets, so information about the courses will have been displayed to people who previously didn’t know about them (that is, after all, the point of marketing).

And, in a couple of cases, the tweet was shown to people who apparently weren’t that interested in the courses.

As you’ll see, both tweets are based on the idea that Perl training is pointless in 2016. Presumably because Perl has no place in the world of modern software development. This idea is, of course, wrong and I thought I’d take some time to explain why it is so wrong.

In order for training to be relevant, I think that two things need to be true. Firstly the training has to be in a technology that people use and secondly there needs to be an expectation that some people who use that technology aren’t as expert in as they would like to be (or as their managers would like them to be). Let’s look at those two propositions individually.

Do people still use Perl? Seems strange that I even have to dignify that question with a response. Of course people still use Perl. I’m a freelance programmer who specialises in Perl and I’m never short of people wanting me to work for them. I won’t deny that the pool of Perl-using companies has got smaller in the last ten years, but they are still out there. And they are still running successful businesses based on Perl.

So there’s no question that Perl satisfies the first of our two points. You just have to look at the size of the Perl groups on Facebook or LinkedIn to see that plenty of people are still using Perl. Or come along to a YAPC and see how many companies are desperate to employ Perl programmers.

I think it’s the second part of the question that is more interesting. Because I think that reveals what is really behind the negative attitude that some people have towards Perl. Are there people using Perl who don’t know all they need to know about it?

Think back to Perl’s heyday in the second half of the 1990s. A huge majority of dotcoms were using Perl to power their web sites. And because web technologies were so new, most of the Perl behind those sites was of a terrible standard. They were horrible monolithic CGI programs with hard-coded HTML within the Perl code (thereby making it almost impossible for designers to improve the look of the web site). When they talked to databases, they used raw SQL that was also hard-coded into the source. The CGI technology itself meant that as soon as your site became popular, your web server was spawning hundreds of Perl processes every minute and response times ballooned. So we switched to mod_perl which meant rewriting all of the code and in many cases the second version was even more unmaintainable than the first.

It’s not surprising that many people got a bad impression of Perl. But any technology that was being used back then had exactly the same problems. We were all learning on the job.

Many people turned their backs on Perl at that point. And, crucially, stopped caring what was going on in Perl development. And like British ex-pats who think the UK still works the way it did when they left in the 1960s, these people think the state of the art in Perl web development is those balls of mud they worked on fifteen or twenty years ago.

And it’s not like that at all. Perl has moved on. Perl has all of the tools that you’d expect to see in any modern programming language. Moose is as good as, if not better than, the OO support in any other language. DBIx::Class is as flexible an ORM as you’ll find anywhere. Plack and PSGI make writing web apps in Perl as easy as it is in any other language. Perl has always been the magpie language – it would be crazy to assume that it hasn’t stolen all the good ideas that have emerged in other languages over the last fifteen years. It has stolen those ideas and in many cases it has improved on them.

All of which brings us back to my second question. Are there people out there who need to learn more about Perl? Absolutely there are. The two people whose tweets I quoted above are good examples. They appear to have bought into the common misconception that Perl hasn’t changed since Perl 5 was released over twenty years ago.

That’s often what I find when I run these courses. There are people out there with ten or fifteen years of Perl experience who haven’t been exposed to all of the great Modern Perl tools that have been developed in the last ten years. They think they know Perl, but their eyes are opened after a couple of hours on the course. They go away with long lists of tools that they want to investigate further.

I’m not saying that everyone should use Perl. If you’re comfortable using other technologies to get your job done, then that’s fine, of course. But if you haven’t followed Perl development for over ten years, then please don’t assume that you know the current state of the language. And please try to resist making snarky comments about things that you know nothing about.

If, on the other hand, you are interested in seeing how Perl has changed in recent years and getting an overview of the Modern Perl toolset, then we’d love to see you on the courses.

The post Why Learn Perl? appeared first on Perl Hacks.


2015 in Gigs

As has become traditional round these parts, it’s time for my annual review of the gigs I saw last year.

I saw 48 gigs in 2015. That’s up on 2014’s 45, but still short of my all time high of 60 in 2013. I saw Chvrches, Stealing Sheep and Paper Aeroplanes twice. I was supposed to see a couple of other artists twice, but Natalie Prass cancelled the second show and I couldn’t get to the second Soak show as I was ill.

As always, there were some disappointments. Renaissance really weren’t very good (I waited to hear “Northern Lights” and then buggered off) and Elbow weren’t as good as I’d seen them before. But the biggest disappointment this year has to be Bob Dylan. He was terrible. I left at the interval.

About half-way through the year, I stopped writing reviews on my gig site. I’ve put up posts with just the data about the shows and I hope to back-fill some of the reviews at some point, but I can’t see it happening soon. Hopefully I’ll keep the site more up to date this year.

So here (in chronological order) are my favourite gigs of the year:

Gigs that fell just outside of the top ten included Julian Cope, Suzanne Vega, Paper Aeroplanes and Smoke Fairies. Oh, and the Indie Daze Festival was great too.

I already have tickets for a dozen shows in 2016. I’m particularly looking forward to ELO in April and seeing the Cure for the first time in far too many years in December.

The post 2015 in Gigs appeared first on Davblog.




Modern Web Development with Perl

The training course I ran at the 2015 London Perl Workshop

Improving Dev Assistant

My lightning talk from the 2015 London Perl Workshop

Conference Driven Publishing

My talk from the 2015 London Perl Workshop

Doctor Who Festival

In 2013, to celebrate the 50th anniversary of Doctor Who, the BBC put on a big celebration at the Excel centre in London’s Docklands. They must have thought that it went well as this year they decided to do it all over again at the Doctor Who Festival which took place last weekend. Being the biggest Doctor Who fan I know, I was at both events and I thought it might be interesting to compare them.

Each event ran over three days (Friday to Sunday). I visited both events on the Sunday on the basis that there would be one more episode of the show to talk about. This was particularly important in 2013 when the 50th anniversary special was broadcast on the Saturday night.


Let’s start with the basics. This year’s event was more expensive than the 2013 one. And the price increases were both large and seemingly random. Here’s a table comparing the prices.

              Standard                     Tardis
          Adult    Child    Family     Adult     Child    Family
2013      £45.00   £20.00   £104.00    £95.50    £44.25   £218.00
2015      £68.00   £32.35   £171.00    £116.00   £52.75   £293.00
Increase  51.11%   61.75%   64.42%     21.47%    19.21%   34.40%

You’ll see that some prices “only” went up by about 20% while others increased by an eye-watering 64%. There’s obviously money to be made in these events. And, equally obviously, Doctor Who fans are happy to pay any price for entrance. I don’t know about you, but those increases over two years in which inflation has hovered around 0% scream “rip-off” to me.

You’ll notice that I’ve quoted prices for two different types of ticket. There are standard tickets and “Tardis” tickets. Tardis tickets give you certain extras. We’ll look at those next.

Tardis Tickets

I’ll admit here that I went for the Tardis ticket both times. The big advantage of this ticket is that in the big panels (and we’ll see later how those panels are the main part of the day) the front eight or so rows are reserved for Tardis ticket holders. So if you have a Tardis ticket you are guaranteed to be close enough to see the people on the stage. Without a Tardis ticket you can be at the far end of the huge hall, where you might be able to make out that some people are on the stage, but you’ll be relying on the big video screens to see what is going on.

To me, that’s the big advantage of the Tardis ticket. Does it justify paying almost double the standard ticket price? I’m not sure. But you get a couple of other advantages. You get a free goodie bag. In 2013, that contained a load of tat (postcards, stickers, a keyfob, stuff like that) that I ended up giving away. This year we got the show book (which was pretty interesting and very nearly worth the £10 they were charging for it) and a t-shirt (which was being sold on the day for £25). So the 2015 goodie bag was a massive improvement on the 2013 one.

Tardis ticket-holders also got access to a special lounge where you could relax and partake of free tea, coffee and biscuits. In 2013 this was in a private area away from the rest of the show. This year it was a cordoned-off corner of the main exhibition hall, which didn’t seem like quite so much of a haven of calm.

Main Panels

The main structure of the day is made up of three big discussion panels that are held in a huge room. Each panel is run twice during the day, but when you buy your ticket you know which time you’ll be seeing each panel.

Each panel has people who are deeply involved in the show. In 2013 we had the following panels:

This year we had:

Both sets of panels were equally interesting. Having the former Doctors taking part in the 50th anniversary year made a lot of sense.

Exhibition Hall

The other main part of the event was an exhibition hall where various things were taking place. I think this was disappointing this year. Here are some comparisons:

Sets from the show

As far as I can remember, in 2013 there was only the entrance to Totter’s Yard and the outside of a Tardis. This year there was Davros’ hospital room, Clara’s living room and the outside of a Tardis (although this clearly wasn’t a “real” Tardis – the font on the door sign was terrible). So there were more sets this year, but I rather questioned their description of Clara’s living room as an “iconic” set.


There were a lot of opportunities to buy stuff, but it seemed to me that there were rather fewer stalls there this year. Merchandise seemed to fall into two categories. There was stuff that you would have been better off buying from Amazon (DVDs, board games, books, stuff like that). And there was really expensive stuff. I really can’t justify spending £60 or £80 for incredibly intricate replicas of props from the show or £200(!) for a copy of one of the Doctor’s coats.

There was one big exception to the “cheaper on Amazon” rule. The BBC shop had a load of classic DVDs on sale for £6 each.

In 2013 I bought a couple of postcards. This year I managed to resist buying anything. But I appeared to be rather unusual in that – there were a lot of people carrying many large bags of stuff.

Other Stages

Both years, around the edge of the main hall there were areas where other talks and workshops were taking place. This year’s seemed slightly disappointing. For example, on one stage in 2013 I saw Dick Maggs giving an interesting talk about working with Delia Derbyshire to create the original theme tune. The equivalent area this year had a group of assistant directors giving a list of the people who work on set when an episode of the show is being made.

In 2013, the centre of this room was given over to an area where many cast members from the show’s history were available for autographs and photos. This year, that’s where Clara’s living room was set up. In fact the four cast members who were in the panel I mentioned above were the only cast members who were involved in this event at all. I realise that it makes more sense for there to be lots of cast members involved in the 50th anniversary celebrations, but surely there were some other current cast members who could have turned up and met their fans.

Also in this hall was an area where the Horror Channel (who are the current home of Classic Doctor Who in the UK) were showing old episodes. There was something similar in 2013, but (like the Tardis lounge) it was away from the main hall. Moving this and the Tardis lounge to the main hall made me think that they were struggling a bit to fill the space.

In Summary

This year’s event was clearly a lot more expensive than the one in 2013 and I think attendees got rather less for their money. All in all I think it was slightly disappointing.

The big panels are clearly the centrepiece of the event and they are well worth seeing. But I think you need a Tardis ticket in order to guarantee getting a decent view. Oh, yes, you can get in the ninth row without a Tardis ticket, but you’d be competing with a lot of people for those seats. You’d spend the whole day queuing to stand a chance of getting near the front.

I don’t know what the BBC’s plans for this event are, but it’s clearly a good money-spinner for them and I’d be surprised if they didn’t do it again either next year or in 2017. And the fans don’t really seem to mind how much they pay to attend, so it’ll be interesting to see how the next one is priced.

I think that the big panels still make the event worth attending, but there’s really not much else that I’m interested in. So I’m undecided as to whether I’d bother going again in the future.

Were you at the event? What did you think of it? How much money did you spend in total?

The post Doctor Who Festival appeared first on Davblog.


Conference Driven Publishing

A talk I gave at the London Perl Mongers Technical Meeting on 13th August 2015







Stray Dog in Antisiranana

Dave Cross posted a photo:



Feed Subscribe

Powered by Perlanet