twitter

Oh. Actually, that's not true. Going to a wedding blessing ceremony this afternoon. Somewhere in deepest Walworth.

twitter

Got a ticket for Deadpool tomorrow. But today, I think, a little light genealogy.

twitter

First weekend this week that I don't have anything specific to get done. Which is nice.

twitter

@WithingsSupport That was the problem that I had. All fixed now.

twitter

@WithingsSupport What's New for the new release says "If you suffered, from the 2.12 crash, we are really sorry for the inconvenience!"

books read

Take Back the Skies (Take Back the Skies, #1)
author: Lucy Saxon
name: David
average rating: 3.47
book published: 2014
rating: 0
read at:
date added: 2016/02/05
shelves: currently-reading
review:

perl hacks

Why Learn Perl?

A couple of months ago I mentioned some public training courses that I’ll be running in London next month. The courses are being organised by FlossUK and since the courses have been announced the FlossUK crew have been running a marketing campaign to ensure that as many people as possible know about the courses. As part of that campaign they’ve run some sponsored tweets, so information about the courses will have been displayed to people who previously didn’t know about them (that is, after all, the point of marketing).

And, in a couple of cases, the tweet was shown to people who apparently weren’t that interested in the courses.

Both tweets were based on the idea that Perl training is pointless in 2016, presumably because Perl has no place in the world of modern software development. This idea is, of course, wrong, and I thought I’d take some time to explain why it is so wrong.

In order for training to be relevant, I think that two things need to be true. Firstly, the training has to be in a technology that people use; secondly, there needs to be an expectation that some of the people who use that technology aren’t as expert in it as they would like to be (or as their managers would like them to be). Let’s look at those two propositions individually.

Do people still use Perl? Seems strange that I even have to dignify that question with a response. Of course people still use Perl. I’m a freelance programmer who specialises in Perl and I’m never short of people wanting me to work for them. I won’t deny that the pool of Perl-using companies has got smaller in the last ten years, but they are still out there. And they are still running successful businesses based on Perl.

So there’s no question that Perl satisfies the first of our two points. You just have to look at the size of the Perl groups on Facebook or LinkedIn to see that plenty of people are still using Perl. Or come along to a YAPC and see how many companies are desperate to employ Perl programmers.

I think it’s the second part of the question that is more interesting. Because I think that reveals what is really behind the negative attitude that some people have towards Perl. Are there people using Perl who don’t know all they need to know about it?

Think back to Perl’s heyday in the second half of the 1990s. A huge majority of dotcoms were using Perl to power their web sites. And because web technologies were so new, most of the Perl behind those sites was of a terrible standard: horrible monolithic CGI programs with hard-coded HTML embedded in the Perl code (thereby making it almost impossible for designers to improve the look of the web site). When they talked to databases, they used raw SQL that was also hard-coded into the source. The CGI technology itself meant that as soon as your site became popular, your web server was spawning hundreds of Perl processes every minute and response times ballooned. So we switched to mod_perl, which meant rewriting all of the code, and in many cases the second version was even more unmaintainable than the first.

It’s not surprising that many people got a bad impression of Perl. But any technology that was being used back then had exactly the same problems. We were all learning on the job.

Many people turned their backs on Perl at that point. And, crucially, stopped caring what was going on in Perl development. And like British ex-pats who think the UK still works the way it did when they left in the 1960s, these people think the state of the art in Perl web development is those balls of mud they worked on fifteen or twenty years ago.

And it’s not like that at all. Perl has moved on. Perl has all of the tools that you’d expect to see in any modern programming language. Moose is as good as, if not better than, the OO support in any other language. DBIx::Class is as flexible an ORM as you’ll find anywhere. Plack and PSGI make writing web apps in Perl as easy as it is in any other language. Perl has always been the magpie language – it would be crazy to assume that it hasn’t stolen all the good ideas that have emerged in other languages over the last fifteen years. It has stolen those ideas and in many cases it has improved on them.
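To give a flavour of that modern toolset, here’s the kind of declarative class that Moose lets you write (a generic sketch of my own, not an example from the courses; the Point class is invented for illustration):

package Point;
use Moose;

# Read-only attributes with type constraints; Moose generates the
# constructor and the accessors for you.
has 'x' => (is => 'ro', isa => 'Num', required => 1);
has 'y' => (is => 'ro', isa => 'Num', required => 1);

sub distance_from_origin {
    my $self = shift;
    return sqrt($self->x ** 2 + $self->y ** 2);
}

1;

Compare that with the hand-rolled bless-and-accessors boilerplate that anyone who left Perl in the 1990s will remember.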

All of which brings us back to my second question. Are there people out there who need to learn more about Perl? Absolutely there are. The two people whose tweets I quoted above are good examples. They appear to have bought into the common misconception that Perl hasn’t changed since Perl 5 was released over twenty years ago.

That’s often what I find when I run these courses. There are people out there with ten or fifteen years of Perl experience who haven’t been exposed to all of the great Modern Perl tools that have been developed in the last ten years. They think they know Perl, but their eyes are opened after a couple of hours on the course. They go away with long lists of tools that they want to investigate further.

I’m not saying that everyone should use Perl. If you’re comfortable using other technologies to get your job done, then that’s fine, of course. But if you haven’t followed Perl development for over ten years, then please don’t assume that you know the current state of the language. And please try to resist making snarky comments about things that you know nothing about.

If, on the other hand, you are interested in seeing how Perl has changed in recent years and getting an overview of the Modern Perl toolset, then we’d love to see you on the courses.

The post Why Learn Perl? appeared first on Perl Hacks.

davblog

2015 in Gigs

As has become traditional round these parts, it’s time for my annual review of the gigs I saw last year.

I saw 48 gigs in 2015. That’s up on 2014’s 45, but still short of my all time high of 60 in 2013. I saw Chvrches, Stealing Sheep and Paper Aeroplanes twice. I was supposed to see a couple of other artists twice, but Natalie Prass cancelled the second show and I couldn’t get to the second Soak show as I was ill.

As always, there were some disappointments. Renaissance really weren’t very good (I waited to hear “Northern Lights” and then buggered off) and Elbow weren’t as good as when I’d seen them before. But the biggest disappointment this year has to be Bob Dylan. He was terrible. I left at the interval.

About half-way through the year, I stopped writing reviews on my gig site. I’ve put up posts with just the data about the shows and I hope to back-fill some of the reviews at some point, but I can’t see it happening soon. Hopefully I’ll keep the site more up to date this year.

So here (in chronological order) are my favourite gigs of the year:

Gigs that fell just outside of the top ten included Julian Cope, Suzanne Vega, Paper Aeroplanes and Smoke Fairies. Oh, and the Indie Daze Festival was great too.

I already have tickets for a dozen shows in 2016. I’m particularly looking forward to ELO in April and seeing the Cure in December for the first time in far too many years.

The post 2015 in Gigs appeared first on Davblog.

perl hacks

Easy PSGI

When I write replies to questions on StackOverflow and places like that recommending that people abandon CGI programs in favour of something that uses PSGI, I often get some push-back from people claiming that PSGI makes things far too complicated.

I don’t believe that’s true. But I think I know why they say it. I think they say it because most of the time when we say “you should really port that code to PSGI” we follow up with links to Dancer, Catalyst or Mojolicious tutorials.

I know why we do that. I know that a web framework is usually going to make writing a web app far simpler. And, yes, I know that in the Plack::Request documentation, Miyagawa explicitly says:

Note that this module is intended to be used by Plack middleware developers and web application framework developers rather than application developers (end users).

Writing your web application directly using Plack::Request is certainly possible but not recommended: it’s like doing so with mod_perl’s Apache::Request: yet too low level.

If you’re writing a web application, not a framework, then you’re encouraged to use one of the web application frameworks that support PSGI (http://plackperl.org/#frameworks), or see modules like HTTP::Engine to provide higher level Request and Response API on top of PSGI.

And, in general, I agree with him wholeheartedly. But I think that when we’re trying to persuade people to switch to PSGI, these suggestions can get in the way. People see switching their grungy old CGI programs to a web framework as a big job. I don’t think it’s as scary as they might think, but I agree it’s often a non-trivial task.

Even without using a web framework, I think that you can get benefits from moving software to PSGI. When I’m running training courses on PSGI, I emphasise three advantages that PSGI gives you over other Perl web development environments.

  1. PSGI applications are easier to debug and test.
  2. PSGI applications can be deployed in any environment you want without changing a line of code.
  3. Plack Middleware

And I think that you can benefit from all of these features pretty easily, without moving to a framework. I’ve been thinking about the best way to do this and I think I’ve come up with a simple plan:

That’s all you need. You can drop your new program into your cgi-bin directory and it will just start working. You can immediately benefit from easier testing and later on, you can easily deploy your application in a different environment or start adding in middleware.
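To make that concrete, here’s a minimal sketch of the kind of program I have in mind (my own reconstruction, so treat the details as assumptions rather than a definitive recipe). It’s a plain PSGI coderef that runs unchanged under plackup, but notices when it has been invoked as a CGI program and hands itself to Plack’s CGI handler:

#!/usr/bin/perl
use strict;
use warnings;

use Plack::Request;

# An ordinary PSGI application: a coderef that takes an environment
# hashref and returns the standard three-element response.
my $app = sub {
    my $req = Plack::Request->new(shift);

    my $res = $req->new_response(200);
    $res->content_type('text/plain');
    $res->body('Hello, ' . ($req->param('name') // 'world') . "\n");

    return $res->finalize;
};

# Under a CGI environment, let Plack::Handler::CGI drive the app;
# under plackup (or any other PSGI server) just return the coderef.
if ($ENV{GATEWAY_INTERFACE}) {
    require Plack::Handler::CGI;
    Plack::Handler::CGI->new->run($app);
}

$app;

During development you run exactly the same file with plackup; later you can leave it in cgi-bin or hand it to a proper PSGI server, which is the whole point.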

As an experiment to find how easy this was, I’ve been porting some old CGI programs. Back in 2000, I wrote three articles introducing CGI programming for Linux Format. I’ve gone back to those articles and converted the CGI programs to PSGI (well, so far I’ve done the programs from the first two articles – I’ll finish the last one in the next day or so, I hope).

It’s not the nicest of code. I was still using CGI.pm’s HTML generation functions back then. I’ve replaced those calls with HTML::Tiny. And they aren’t very complicated programs at all (they were aimed at complete beginners). But I hope they’ll be a useful guide to how easy it is to start using PSGI.

My programs are on Github. Please let me know what you think.

If you’re interested in modern Perl Web Development Techniques, you might find it useful to attend my upcoming two-day course on the subject.

Update: On Twitter, Miyagawa reminds me that you can use CGI::Emulate::PSGI or CGI::PSGI to run CGI programs under PSGI without changing them at all (or, at least, changing them a lot less than I’m suggesting here). And that’s what I’d probably do if I had a large amount of CGI code that I wanted to move to PSGI quickly. But I still think it’s worth showing people that simple PSGI programs really aren’t any more complicated than simple CGI programs.
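For completeness, the emulation route is about as small as it sounds. A sketch based on CGI::Emulate::PSGI’s documented handler interface (the script name here is invented):

use strict;
use warnings;
use CGI::Emulate::PSGI;

# Wrap an existing, unmodified CGI program as a PSGI application.
my $app = CGI::Emulate::PSGI->handler(sub {
    do './old-script.cgi';
});

$app;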

The post Easy PSGI appeared first on Perl Hacks.

perl hacks

London Perl Workshop Review

(Photo by Mark Keating)

Last Saturday was the annual London Perl Workshop. And, as always, it was a great opportunity to soak up the generosity, good humour and all-round-awesomeness of the European Perl community. I say “European” as the LPW doesn’t just get visitors from London or the UK. There are many people who attend regularly from all over Europe. And, actually, from further afield – there are usually two or three Americans there.

I arrived at about twenty to nine, which gave me just enough time to register and say hello to a couple of people before heading to the main room for Mark Keating’s welcome. Mark hinted that with next year’s workshop being the tenth that he will have organised, he is starting to wonder if it’s time for someone else to take over. More on that later.

I then had a quick dash back down to the basement where I was running a course on Modern Web Development with Perl. It seemed to go well, people seemed engaged and asked some interesting questions. Oh, and my timing was spot on to let my class out two minutes early so that they were at the front of the queue for the free cakes (courtesy of Exonetric). That’s just my little trick for getting slightly higher marks in the feedback survey.

After the coffee break I was in the smaller lecture theatre for three interesting talks – Neil Bowers on Boosting community engagement with CPAN (and, yes, I’ve finally got round to signing up for the CPAN Pull Request Challenge), Smylers on Code Interface Mistakes to Avoid and Neil Bowers (again) on Dependencies and the River of CPAN, which was an interesting discussion of how the way you maintain a CPAN module should change as it becomes more important to more people.

Then it was lunch, which I spent in the University cafeteria catching up with friends.

After lunch, I saw Léon Brocard on Making your website seem faster, followed by Steve’s “Man Publishing Pint”, which turned out to be about publishing ebooks to Amazon easily – something which I’ve been very interested in recently.

The schedule was in a bit of a state of flux, so I missed Andrew Solomon’s talk on How to grow a Perl team and instead saw Steve Mynott talking about Perl 6 Grammars. Following that, I gave my talk on Conference Driven Publishing (which is part apology for not writing the book I promised to write at the last LPW and part attempt to get more people writing and publishing ebooks about Perl).

Then there was another coffee break which I spent getting all the latest gossip from some former colleagues. We got so caught up in it that I was slightly late for Theo van Hoesel’s talk Dancer2 REST assured. I like Theo’s ideas but (as I’ve told him face to face) I would like to see a far simpler interface.

Next up was the keynote. Liz Mattijsen stood in for Jonathan Worthington (who had to cancel at the last minute) and she explained the history of her involvement in Perl and how she was drawn to working on Perl 6. She finished with a brief overview of some interesting Perl 6 features.

Then there were the lightning talks which were their usual mixture of useful, thought-provoking and insane.

Mark Keating closed the conference by thanking everyone for their work, their sponsorship and their attendance. He returned to the theme of perhaps passing on the organisation of the workshop to someone new. No-one, I think, can fail to be incredibly grateful for the effort that Mark has put into organising the last nine workshops, and it makes complete sense to me that he can’t maintain that level of effort forever. So it would be sensible to start looking for someone else to take over organising the workshop in the future. And, given the complexity of the task, it would also be sensible for that person to get involved as soon as possible, so that we can have a smooth transition during the organisation of next year’s event.

If you’re interested in becoming a major hero to the European Perl community, then please get in touch with Mark.

There was no planned post-workshop event this year. So we broke up into smaller groups and probably colonised most of central London. Personally, I gathered a few friends and wandered off to my favourite restaurant in Chinatown.

I can only repeat what Mark said as he closed the workshop and give my thanks to all of the organisers, volunteers, speakers, sponsors and attendees. There’s little doubt in my mind that the LPW is, year after year, one of the best grass-roots-organised events in the European geek calendar. And this year’s was as good as any.

The post London Perl Workshop Review appeared first on Perl Hacks.

perl hacks

The Long Death of CGI.pm

CGI.pm has been removed from the core Perl distribution. From 5.22, it is no longer included in a standard Perl installation.

There are good technical reasons for this. CGI is a dying technology. In 2015, there are far better ways to write web applications in Perl. We don’t want to be seen to encourage the use of a technology which no-one should be using.

This does lead to a small problem for us though. There are plenty of web hosting providers out there who don’t have particularly strong Perl support. They will advertise that they support Perl, but that’s just because they know that Perl comes as a standard part of the operating system that they run on their servers. They won’t do anything to change their installation in any way. Neither you nor I would use a hosting company that works like that – but plenty of people do.

The problem comes when these companies start to deploy an operating system that includes Perl 5.22. All of a sudden, those companies will stop including CGI.pm on their servers. And while we don’t want to encourage people to use CGI.pm (or, indeed, the CGI protocol itself), we need to accept that there are thousands of sites out there that have been happily running software based on CGI.pm for years. The owners of those sites will at some point change hosting providers or upgrade their service plan, end up on a server that has Perl 5.22 but no CGI.pm, and their software will break.

I’ve always assumed that this problem is some time in the future. As far as I can see, the only mainstream Linux distribution that currently includes Perl 5.22 is Fedora 23. And you’d need to be pretty stupid to run a web hosting business on any version of Fedora. Fedora is a cutting edge distribution with no long term support. Versions of Fedora are only supported for about a year after their release.

So the problem is in the future, but it is coming. At some point Perl 5.22 or one of its successors will make it into Red Hat Enterprise Linux. And at that point we have a problem.

Or so I thought. But that’s not the case. The problem is here already. Not because of Perl 5.22 (that’s still a year or two in the future for most of these web hosting companies) but because of Red Hat.

Red Hat, like pretty much everyone, include Perl in their standard installation. If you install any Linux distribution based on Red Hat, then the out of the box installation includes an RPM called “perl”. But it’s not really what you would recognise as Perl. It’s a cut down version of Perl. They have stripped out many parts of Perl that they consider non-essential. And those parts include CGI.pm.

This change in the way they package Perl started with RHEL 6 – which comes with Perl 5.10. And remember, it’s not just RHEL that is affected. There are plenty of other distributions that use RHEL as a base – CentOS, Scientific Linux, Cloud Linux and many, many more.

So if someone uses a server running RHEL 6 or greater (or another OS that is based on RHEL 6 or greater) and the hosting company have not taken appropriate action, then that server will not have CGI.pm installed.
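Incidentally, if you want to check whether a particular server is affected, a one-liner like this will tell you (my suggestion, not part of the original discussion):

perl -MCGI -e 'print "CGI.pm version $CGI::VERSION\n"'

On a host with the stripped-down “perl” RPM, that dies with a “Can’t locate CGI.pm in @INC” error instead of printing a version.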

What is the “appropriate action”, you ask? Well, it’s pretty simple. Red Hat also make another RPM available that contains the whole Perl distribution. So bringing the Perl installation up to scratch on a RHEL host is as simple as running:

yum install perl-core

That will work on a server running RHEL 6 (which has Perl 5.10) and RHEL 7 (which has Perl 5.16). On a future version of RHEL which includes Perl 5.22 or later, that obviously won’t work as CGI.pm won’t be part of the standard Perl installation and therefore won’t be included in “perl-core”. At that point it will still be a good idea to install “perl-core” (to get the rest of the installation that you are missing) but to get CGI.pm, you’ll need to run:

yum install perl-CGI

So this is a plea to people who are running web hosting services using Red Hat style Linux distributions. Please ensure that your servers are running a complete Perl installation by running the “yum” command above.

All of which brings me to this blog post that Marc Lehmann wrote a couple of days ago. Marc found a web site which no longer worked because it had been moved to a new server with a newer version of Perl – one that didn’t include CGI.pm. Marc thinks that the Perl 5 Porters have adopted a cavalier approach to backward compatibility, and he chose to interpret this site’s problems as a good example of the damage caused by that approach and, specifically, by the removal of CGI.pm.

This sounded unlikely to me. As I said above, it would be surprising if any web hosting company was using 5.22 at this point. So I did a little digging. I found that the site was hosted by Blacknight Solutions and that their web site says their servers run Perl 5.8. At the same time, Lee Johnson, the current maintainer of CGI.pm, got in touch with the web site’s owner, who confirmed that what I had worked out was correct.

Later yesterday I had a conversation with @BlackNight on Twitter. They told me that their hosts all ran Cloud Linux (which is based on RHEL) and that new servers were being provisioned using Cloud Linux 6 (which is based on RHEL 6).

So it seems clear what has happened here. The site was running on an older server running Cloud Linux 5, which includes Perl 5.8 and predates Red Hat removing CGI.pm from the “perl” RPM. It then moved to a new host running Cloud Linux 6, which is based on RHEL 6 and doesn’t include CGI.pm in the default installation. So what the site’s owner said is true: he moved to a new host with a newer version of Perl (that new version of Perl was 5.10!). But it wasn’t the new version of Perl that caused the problems; it was the new version of the operating system or, more specifically, the change in the way that Red Hat (and its derivatives) package Perl.

Marc is right that when Perl 5.22 hits the web hosting industry we’ll lose CGI.pm from a lot of web servers. You can make your own mind up on how important that is and whether or not you share Marc’s other opinions on how p5p is steering Perl. But he’s wrong to assume that, in this instance, the problem was caused by anything that p5p have done. In this instance, the problem was caused by Red Hat’s Perl packaging policy and compounded by a hosting company who didn’t know that upgrading their servers to Cloud Linux 6 would remove CGI.pm.

RHEL 6 was released five years ago. I suspect it’s pretty mainstream in the web hosting industry by now. So CGI.pm will already have disappeared from a large number of web servers. I wonder why we haven’t seen a tsunami of complaints?

Update: More discussion on Reddit and Hacker News.

The post The Long Death of CGI.pm appeared first on Perl Hacks.

cpan

WWW-Shorten-3.08

cpan

XML-Feed-0.53

perl hacks

LPW Slides

A more detailed write-up of the LPW will follow in the next few days. But in the meantime, here are the slides to the three talks I gave.

Modern Web Development with Perl from Dave Cross

Conference Driven Publishing from Dave Cross

Improving Dev Assistant from Dave Cross


The post LPW Slides appeared first on Perl Hacks.

slideshare

Modern Web Development with Perl


The training course I ran at the 2015 London Perl Workshop
slideshare

Improving Dev Assistant


My lightning talk from the 2015 London Perl Workshop
slideshare

Conference Driven Publishing


My talk from the 2015 London Perl Workshop
davblog

Doctor Who Festival

In 2013, to celebrate the 50th anniversary of Doctor Who, the BBC put on a big celebration at the Excel centre in London’s Docklands. They must have thought that it went well as this year they decided to do it all over again at the Doctor Who Festival which took place last weekend. Being the biggest Doctor Who fan I know, I was at both events and I thought it might be interesting to compare them.

Each event ran over three days (Friday to Sunday). I visited both events on the Sunday on the basis that there would be one more episode of the show to talk about. This was particularly important in 2013 when the 50th anniversary special was broadcast on the Saturday night.

Price

Let’s start with the basics. This year’s event was more expensive than the 2013 one. And the price increases were both large and seemingly random. Here’s a table comparing the prices.

                  Standard                      Tardis
           Adult    Child    Family      Adult    Child    Family
2013      £45.00   £20.00   £104.00     £95.50   £44.25   £218.00
2015      £68.00   £32.35   £171.00    £116.00   £52.75   £293.00
Increase  51.11%   61.75%   64.42%      21.47%   19.21%   34.40%

You’ll see that some prices “only” went up by about 20% while others increased by an eye-watering 65%. There’s obviously money to be made in these events. And, equally obviously, Doctor Who fans are happy to pay any price for entrance to these events. I don’t know about you, but those increases over two years where inflation has hovered around 0% scream “rip-off” to me.

You’ll notice that I’ve quoted prices for two different types of ticket. There are standard tickets and “Tardis” tickets. Tardis tickets give you certain extras. We’ll look at those next.

Tardis Tickets

I’ll admit here that I went for the Tardis ticket both times. The big advantage that this ticket gives you is that in the big panels (and we’ll see later how those panels are the main part of the day) the front eight or so rows are reserved for Tardis ticket holders. So if you have a Tardis ticket you are guaranteed to be close enough to see the people on the stage. Without a Tardis ticket you can be at the far end of the huge hall, where you might be able to make out that some people are on the stage, but you’ll be relying on the big video screens to see what is going on.

To me, that’s the big advantage of the Tardis ticket. Does it justify paying almost double the standard ticket price? I’m not sure. But you get a couple of other advantages. You get a free goodie bag. In 2013, that contained a load of tat (postcards, stickers, a keyfob, stuff like that) that I ended up giving away. This year we got the show book (which was pretty interesting and very nearly worth the £10 they were charging for it) and a t-shirt (which was being sold on the day for £25). So the 2015 goodie bag was a massive improvement on the 2013 one.

Tardis ticket-holders also got access to a special lounge where you could relax and partake of free tea, coffee and biscuits. In 2013 this was in a private area away from the rest of the show. This year it was a cordoned-off corner of the main exhibition hall, which didn’t seem like quite so much of a haven of calm.

Main Panels

The main structure of the day is made up of three big discussion panels that are held in a huge room. Each panel is run twice during the day, but when you buy your ticket you know which time you’ll be seeing each panel.

Each panel has people who are deeply involved in the show. In 2013 we had the following panels:

This year we had:

Both sets of panels were equally interesting. Having the former Doctors taking part in the 50th anniversary year made a lot of sense.

Exhibition Hall

The other main part of the event was an exhibition hall where various things were taking place. I think this was disappointing this year. Here are some comparisons:

Sets from the show

As far as I can remember, in 2013 there was only the entrance to Totter’s Yard and the outside of a Tardis. This year there was Davros’ hospital room, Clara’s living room and the outside of a Tardis (although this clearly wasn’t a “real” Tardis – the font on the door sign was terrible). So there were more sets this year, but I rather questioned their description of Clara’s living room as an “iconic” set.

Merchandise

There were a lot of opportunities to buy stuff, but it seemed to me that there were rather fewer stalls there this year. Merchandise seemed to fall into two categories. There was stuff that you would have been better off buying from Amazon (DVDs, board games, books, stuff like that). And there was really expensive stuff. I really can’t justify spending £60 or £80 for incredibly intricate replicas of props from the show or £200(!) for a copy of one of the Doctor’s coats.

There was one big exception to the “cheaper on Amazon” rule. The BBC shop had a load of classic DVDs on sale for £6 each.

In 2013 I bought a couple of postcards. This year I managed to resist buying anything. But I appeared to be rather unusual in that – there were a lot of people carrying many large bags of stuff.

Other Stages

Both years, around the edge of the main hall there were areas where other talks and workshops were taking place. This year’s seemed slightly disappointing. For example, on one stage in 2013 I saw Dick Maggs giving an interesting talk about working with Delia Derbyshire to create the original theme tune. The equivalent area this year had a group of assistant directors giving a list of the people who work on set when an episode of the show is being made.

In 2013, the centre of this room was given over to an area where many cast members from the show’s history were available for autographs and photos. This year, that’s where Clara’s living room was set up. In fact the four cast members who were in the panel I mentioned above were the only cast members who were involved in this event at all. I realise that it makes more sense for there to be lots of cast members involved in the 50th anniversary celebrations, but surely there were some other current cast members who could have turned up and met their fans.

Also in this hall was an area where the Horror Channel (who are the current home of Classic Doctor Who in the UK) were showing old episodes. There was something similar in 2013, but (like the Tardis lounge) it was away from the main hall. Moving this and the Tardis lounge to the main hall made me think that they were struggling a bit to fill the space.

In Summary

This year’s event was clearly a lot more expensive than the one in 2013 and I think attendees got rather less for their money. All in all I think it was slightly disappointing.

The big panels are clearly the centrepiece of the event and they are well worth seeing. But I think you need a Tardis ticket in order to guarantee getting a decent view. Oh, yes you can get in the ninth row without a Tardis ticket, but you’d be competing with a lot of people for those seats. You’d spend the whole day queuing to stand a chance of getting near the front.

I don’t know what the BBC’s plans for this event are, but it’s clearly a good money-spinner for them and I’d be surprised if they didn’t do it again either next year or in 2017. And the fans don’t really seem to mind how much they pay to attend, so it’ll be interesting to see how the next one is priced.

I think that the big panels still make the event worth attending, but there’s really not much else that I’m interested in. So I’m undecided as to whether I’d bother going again in the future.

Were you at the event? What did you think of it? How much money did you spend in total?

The post Doctor Who Festival appeared first on Davblog.

davblog

Eighteen Classic Albums

A couple of months ago, I wrote a post about a process I had developed for producing ebooks. While dabbling in a few projects (none of which are anywhere near being finished) I established that the process worked and I was able to produce ebooks in various different formats.

But what I really needed was a complete book to try the process on, so that I could push it right through the pipeline until it was on sale on Amazon. I didn’t have the time to write a new book, so I looked around for some existing text that I could reuse.

Long-time readers might remember the record club that I was a member of back in 2012. It was a Facebook group where each week we would listen to a classic album and then discuss it with the rest of the group. I took it a little further and wrote up a blog post for each album. That sounded like a good set of posts to use for this project.

So I grabbed the posts, massaged them a bit, added a few other files and, hey presto, we have a book. All in all it took about two or three hours of work. And a lot of that was my amateur attempts at creating a cover image. If you’re interested in the technical stuff, then you can find all the input files on Github.

There has been some confusion over the title of the book. Originally, I thought there were seventeen reviews in the series. But that was because I had mis-tagged one. And, of course, you only find problems like that after you create the book and upload it to Amazon. So there are rare “first printing” versions available with only seventeen reviews and a different title. Currently the book page on Amazon is still showing the old cover. I hope that will be sorted out soon. It’ll be interesting to see how quickly the fixed version is pushed out to people who have already bought the older edition.

My process for creating ebooks is working well. And the next step of the process (uploading the book to Amazon) was pretty painless too. You just need to set up a Kindle Direct Publishing account and then upload a few files and fill in some details of the book. I’ve priced it at $2.99 (which is £1.99) as that’s the cheapest rate at which I can get 70% of the money. The only slight annoyance in the process is that once you’ve uploaded a book and given all the details, you can’t upload a new version or change any of the information (like fixing the obvious problems in the current description) until the current version has been published across all Amazon sites. And that takes hours. And, of course, as soon as you submit one version you notice something else that needs to be fixed. So you wait. And wait.

But I’m happy with the way it has all gone and I’ll certainly be producing more books in the future using this process.

Currently three people have bought copies. Why not join them? It only costs a couple of quid. And please leave a review.

The post Eighteen Classic Albums appeared first on Davblog.

davblog

How To Travel From London To Paris

Imagine that you want to travel from London to Paris. Ok, so that’s probably not too hard to imagine. But also imagine that you have absolutely no idea how to do that and neither does anyone that you know. In that situation you would probably go to Amazon and look for a book on the subject.

Very quickly you find one called “Teach Yourself How To Travel From London To Paris In Twenty-One Days”. You look at the reviews and are impressed.

I had no idea how to get from London to Paris, but my family and I followed the instructions in this book. I’m writing this from the top of the Eiffel Tower – five stars.

And

I really thought it would be impossible to get from London to Paris, but this book really breaks it down and explains how it’s done – five stars.

There are plenty more along the same lines.

That all looks promising, so you buy the book. Seconds later, it appears on your Kindle and you start to read.

Section one is about getting from London to Dover. Chapter one starts by ensuring that all readers are starting from the same place in London, suggesting a particular tavern in Southwark where you might meet other travellers with the same destination, and then sets out a walking route that you might follow from Southwark to Canterbury. It’s written in slightly old-fashioned English and details of the second half of the route are rather sketchy.

Chapter two contains a route to walk from Canterbury to Dover. The language has reverted to modern English and the information is very detailed. There are reviews of many places to stay on the way – many of which mention something called “Trip Advisor”.

Section two is about crossing the channel. Chapter three talks about the best places in Dover to find the materials you are going to need to make your boat and chapter four contains detailed instructions on how to construct a simple but seaworthy vessel. The end of the chapter has lots of advice on how to judge the best weather conditions for the crossing. Chapter five is a beginner’s guide to navigating the English Channel and chapter six has a list of things that might go wrong and how to deal with them.

Section three is about the journey from Calais to Paris. Once again there is a suggested walking route and plenty of recommendations of places to stay.

If you follow the instructions in the book you will, eventually, get to Paris. But you’re very likely to come away thinking that it was all rather more effort than you expected it to be and that next time you’ll choose a destination that is easier to get to.

You realise that you have misunderstood the title of the book. You thought it would take twenty-one days to learn how to make the journey, when actually it will take twenty-one days (at least!) to complete the journey. Surely there is a better way?

And, of course, there is. Reading further in the book’s many reviews you come across the only one-star review:

If you follow the instructions in this book you will waste far too much time. Take your passport to St. Pancras and buy a ticket for the Eurostar. You can be in Paris in less than four hours.

The reviewer claims to be the travel correspondent for BBC Radio Kent. The other reviewers were all people with no knowledge of travel who just happened to come across the book in the same way that you did. Who are you going to trust?

I exaggerate, of course, for comic effect. But reviews of technical books on Amazon are a lot like this. You can’t trust them because in most cases the reviewers are the very people who are least likely to be able to give an accurate assessment of the technical material in the book.

When you are choosing a technical book you are looking for two things: material that is explained clearly, and information that is technically accurate and reflects current best practice.

Most people pick up a technical book because they want to learn about the subject that it covers. That means that, by definition, they are unable to judge that second point. They know how easily they understood the material in the book. They also know whether or not they managed to use that information to achieve their goals. But, as my overstretched metaphor above hopefully shows, it’s quite possible to follow terrible advice and still achieve your goals.

I first became aware of this phenomenon in the late 1990s. At the time a large number of dynamic web pages were built using Perl and CGI. This meant that a lot of publishers saw it as a very lucrative market, and dozens of books were published on the subject, many of which covered the Perl equivalent of walking from London to Paris. And because people read these books and managed to get to Paris (albeit in a ridiculously roundabout manner) they thought the books were great and gave them five-star reviews. Much to the chagrin of Perl experts who were standing on the kerbside on the A2 shouting “but there’s a far easier way to do that!”

This is still a problem today. Earlier this year I reviewed a book about penetration testing using Perl. I have to assume that the author knew what he was doing when talking about pen testing, but his Perl code was positively Chaucerian.

It’s not just book reviews that are affected. Any kind of technical knowledge transfer mechanism is open to the same problems. A couple of months ago I wrote a Perl tutorial for Udemy. It only covered the very basics, so they included a link to one of their other Perl courses. But having sat through the first few lessons of this course, I know that it’s really not very good. How did the people at Udemy choose which one to link to? Well it’s the one with the highest student satisfaction ratings, of course. It teaches the Perl equivalent of boat-building. A friend has a much better Perl course on Udemy, but they wouldn’t use that as it didn’t have enough positive feedback.

Can we blame anyone for this? Well, we certainly can’t blame the reviewers. They don’t know that they are giving good reviews to bad material. I’m not even sure that we can blame the authors in many cases. It’s very likely that they don’t know how much they don’t know (obligatory link to the Dunning–Kruger effect). I think that in some cases the authors must know that they are chancing their arm by putting themselves forward as an expert, but most of them probably believe that they are giving good advice (because they learned from an expert who taught them how to walk from London to Paris and so the chain goes back to the dawn of time).

I think a lot of the blame must be placed with the publishers. They need to take more responsibility for the material they publish. If you’re publishing in a technical arena then you need to build up contacts in that technical community so that you have people you can trust to give opinions on your books. If you’re publishing a book on travelling from London to Paris then see if you can find a travel correspondent to verify the information in it before you publish it and embarrass yourselves. In fact, get these experts involved in the commissioning process. If you want to publish a travel book then ask your travel correspondent friends if they know anyone who could write it. If someone approaches you with a proposal for a travel book then run the idea past a travel correspondent or two before signing the contract.

I know that identifying genuine experts in a field can be hard. And I know that genuine experts would probably like to be compensated for any time they spend helping you, but I think it’s time and money well-spent. You will end up with better books.

Or, perhaps some publishers don’t care about the quality of their books. If bad books can be published quickly and cheaply and people still buy them, then what business sense does it make to make the books better?

If you take any advice away from this piece, then don’t trust reviews and ratings of technical material.

And never try to walk from London to Paris (unless it’s for charity).

The post How To Travel From London To Paris appeared first on Davblog.

slideshare

Conference Driven Publishing


A talk I gave at the London Perl Mongers Technical Meeting on 13th August 2015
davblog

Writing Books (The Easy Bit)

Last night I spoke at a London Perl Mongers meeting. As part of the talk I spoke about a toolchain that I have been using for creating ebooks. In this article I’ll go into a little more detail about the process.

Basically, we’re talking about a process that takes one or more files in some input format and (as easily as possible) turns them into one or more output formats which can be described as “ebooks”. So before we can decide which tools we need, we should decide what those various file formats should be.

For my input format I chose Markdown. This is a text-based format that has become popular amongst geeks over the last few years. Geeks tend to like text-based formats more than the proprietary binary formats like those produced by word processors. This is for a number of reasons. You can read them without any specialised tools. You’re not tied down to using specific tools to create them. And it’s generally easier to store them in a revision management system like Github.

For my output formats, I wanted EPUB and Mobipocket. EPUB is the generally accepted standard for ebooks and Mobipocket is the ebook format that Amazon use. And I also wanted to produce PDFs, just because they are easy to read on just about any platform.

(As an aside, you’ll notice that I said nothing in that previous paragraph about DRM. That’s simply because nice people don’t do that.)

Ok, so we know what file formats we’ll be working with. Now we need to know a) how we create the input format and b) how we convert between the various formats. Creating the Markdown files is easy enough. It’s just a text file, so any text editor would do the job (it would be interesting to find out if any word processor can be made to save text as Markdown).
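To show just how lightweight the input format is, here’s the sort of thing a chapter might look like in Markdown (an invented sample, not taken from one of my books):

# Chapter One

It was a dark and stormy night. This sentence has *emphasis*,
and this one contains a [link](https://example.com/).

* A bullet point
* Another bullet point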

To convert our Markdown into EPUB, we’ll need a new tool. Pandoc describes itself as “a universal document converter”. It’s not quite universal (otherwise that would be the only tool that we would need), but it is certainly great for this job. Once you have installed Pandoc, the conversion is simple:

pandoc -o your_book.epub title.txt your_book.md --epub-metadata=metadata.xml --toc --toc-depth=2

There are two extra files you need here (I’m not sure why it can’t all be in the same file, but that’s just the way it seems to be). The first (which I’ve called “title.txt”) contains two lines. The first line has the title of your book and the second has the author’s name. Each line needs to start with a “%” character. So it might look like this:

% Your title
% Your name

The second file (which I’ve called “metadata.xml”) contains various pieces of information about the book. It’s (ew!) XML and looks like this:

<metadata xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:opf="http://www.idpf.org/2007/opf">
<dc:title id="main">Your Title</dc:title>
<meta refines="#main" property="title-type">main</meta>
<dc:language>en-GB</dc:language>
<dc:creator opf:file-as="Surname, Forename" opf:role="aut">Forename Surname</dc:creator>
<dc:publisher>Your name</dc:publisher>
<dc:date opf:event="publication">2015-08-14</dc:date>
<dc:rights>Copyright ©2015 by Your Name</dc:rights>
</metadata>

So after creating those files and running that command, you’ll have an EPUB file. Next we want to convert that to a Mobipocket file so that we can distribute our book through Amazon. Unsurprisingly, the easiest way to do that is to use a piece of software that you get from Amazon. It’s called Kindlegen and you can download it from their site. Once it is installed, the conversion is as simple as:

kindlegen your_book.epub

This will leave you with a file called “your_book.mobi” which you can upload to Amazon.

There’s one last conversion that you might need. And that’s converting the EPUB to PDF. Pandoc will make that conversion for you. But it does it using a piece of software called LaTeX which I’ve never had much luck with. So I looked for an alternative solution and found it in Calibre. Calibre is mainly an ebook management tool, but it also converts between many ebook formats. It’s pretty famous for having a really complex user interface but, luckily for us, there’s a command line program called “ebook-convert” that we can use.

ebook-convert your_book.epub your_book.pdf

And that’s it. We start with a Markdown file and end up with an ebook in three formats. Easy.
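And if you get bored of typing those three commands, they chain together neatly in a little driver script. This is my own sketch (the base name argument is an assumption; note that kindlegen is known to exit non-zero for mere warnings, so its exit status is reported rather than treated as fatal):

#!/usr/bin/perl
use strict;
use warnings;

my $base = shift // 'your_book';

# Markdown -> EPUB, using the title.txt and metadata.xml described above.
system('pandoc', '-o', "$base.epub", 'title.txt', "$base.md",
    '--epub-metadata=metadata.xml', '--toc', '--toc-depth=2') == 0
    or die "pandoc failed\n";

# EPUB -> Mobipocket for Amazon.
my $status = system('kindlegen', "$base.epub");
warn "kindlegen exited with status $status\n" if $status;

# EPUB -> PDF via Calibre's command line tool.
system('ebook-convert', "$base.epub", "$base.pdf") == 0
    or die "ebook-convert failed\n";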

Of course, that really is the easy part. There’s a bit that comes before (actually writing the book) and a bit that comes after (marketing the book) and they are both far harder. Last year I read a book called Author, Publisher, Entrepreneur which covered these three steps to a very useful level of detail. Their step two is rather different to mine (they use Microsoft Word, if I recall correctly) but what they had to say about the other steps was very interesting. You might find it interesting if you’re thinking of writing (and self-publishing) a book.

I love the way that ebooks have democratised the publishing industry. Anyone can write and publish a book and make it available to everyone through the world’s largest book distribution web site.

So what are you waiting for? Get writing. If you find my toolchain interesting (or if you have any comments on it) then please let me know.

And let me know what you’ve written.

The post Writing Books (The Easy Bit) appeared first on Davblog.

cpan

WWW-Shorten-OneShortLink-9.99

cpan

WWW-Shorten-NotLong-9.99

cpan

WWW-Shorten-Shorl-1.93

slideshare

TwittElection


A Talk from OpenTech 2015 about a tool I wrote for monitoring parliamentary candidates on Twitter during the 2015 UK general election.
flickr

Antsiranana

Dave Cross posted a photo.

flickr

Antisiranana

Dave Cross posted a photo.

flickr

Stray Dog in Antisiranana

Dave Cross posted a photo.

flickr

Antisiranana

Dave Cross posted a photo.


Powered by Perlanet