Michael Naman, head of creative technology at Weapon7, describes the challenges of building a cross-platform video creation and sharing app for Children in Need
Children in Need is an iconic BBC institution that has brought smiles, and a few tears, to the nation since the early 80s. With its loveable mascot, Pudsey, Children in Need has raised millions of pounds in donations, with the ultimate goal of giving every child in the UK the childhood they deserve.
Children in Need approached Weapon7 with the task of creating a mobile experience that would reflect the spirit of the charity and provide an entirely new platform for donating. It wanted something that would engage a new, digitally savvy, younger audience and give them a reason to interact on their most prized possession: the mobile phone. Whatever we came up with also needed to live beyond the one big ‘Children in Need’ night and offer a way to keep people sharing the charity’s messages long after the television event.
So, the first thing we needed was a relevant concept that gave our audience a bit of a smile and rewarded them for bothering to download our app. After numerous brainstorms and collaborative sessions with Children in Need, we hit upon taking our lead from its iconic spokesperson. Or rather, spokesbear.
Originally introduced in 1985, Pudsey Bear became the official logo of the campaign. A beloved child’s toy, with a bandana over his eye suggesting he’d seen better days, Pudsey is the ideal representation for the cause. But he’s just one bear. We wondered whether we could mobilise every child’s favourite toy to help him spread the word.
To that end, we developed a concept whereby you could use a smartphone’s camera to take a photograph of a toy and add an animated mouth to it; then, using inbuilt recording capabilities, literally give that toy a voice. Naturally, we wanted to steer people towards using this in order to make a spoken appeal on behalf of Children in Need, which could be shared directly from the phone. But the app could obviously be used in a far broader way, to give any object, picture or face a mouth that could say absolutely anything you wanted it to.
It’s estimated that in-app purchases will rise to 64 per cent of total iOS market revenue in 2015. It made eminent sense to use this as a new way to encourage donations. So in addition to being able to record your own message, we wanted to have celebrity voices and their catchphrases available to use, for a nominal amount.
It was a bit of fun, inherently shareable and with a built-in facility for generating money for the cause.
All we had to do was build it – for both iOS and Android devices.
Of course, there were considerable technical challenges in bringing it to life. We needed to compile audio and video content on a mobile device – and any kind of audio or video manipulation is costly, both in CPU usage and battery drain.
We first investigated whether there was a cross-platform technology that could accomplish everything we needed, and looked at Adobe AIR for mobile, Titanium and PhoneGap. Unfortunately, while some could do a little of what was required, none offered everything.
The only answer was native app development for both iOS and Android – which in turn brings its own difficulties. Creating a compelling user experience across both platforms with the minimum of reworking is a challenge. From the outset we knew that the user journey for each platform would be different. Each platform has its own way of handling interactions; for example, the Back button on iOS usually sits in the top left of the application, while Android either has a hardware Back button or, on newer versions, a software button at the bottom of the screen.
Another piece of functionality that’s handled differently on Android is sharing. The iOS version has sharing built into the app, whereas the Android version uses Intents. Personally, I feel this produces a much nicer experience for the user, because they can share the content with any application registered to handle that specific type of content, not just the ones coded into the iOS version. Write three lines of code, share with the MIME type video/msvideo and you’re done.
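That sharing code really is that brief. A minimal sketch of the idea, assuming the finished clip has already been saved to a hypothetical `videoFile` (this is the era-appropriate `Uri.fromFile` approach, not necessarily our exact production code):

```java
// Hand the finished video to any app registered for this MIME type.
// videoFile is a hypothetical java.io.File pointing at the saved clip.
Intent share = new Intent(Intent.ACTION_SEND);
share.setType("video/msvideo");
share.putExtra(Intent.EXTRA_STREAM, Uri.fromFile(videoFile));
startActivity(Intent.createChooser(share, "Share your video"));
```

The chooser then lists email, MMS, social apps – anything on the device that declares it can handle the content – with no per-service code on our side.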
While user journeys can be streamlined for a particular operating system, the APIs available for encoding video and manipulating audio are superior on iOS. Our base Android version was Froyo, which represents 12 per cent of the Android market. But only Jelly Bean, the latest version of Android, offers API support for encoding video.
We experimented with compiling FFmpeg in the NDK, but this posed problems with different CPU architectures and would have lengthened development time considerably. The only way to encode the video on Android was to write our own encoder. Luckily Android has a fast, built-in JPEG encoder, which let us create an MJPEG stream from JPEG image frames and raw PCM audio data. Having followed Microsoft’s AVI documentation, we realised that the header information we were writing was incorrect, so we ended up dissecting AVI files from a digital camera to see what the correct AVI header actually was. From there it was a simple task of writing the rest of the encoder.
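To give a flavour of what that header work involves, here’s a minimal sketch – not our production encoder – of writing the outermost RIFF header that every AVI file starts with. The format stores all integers little-endian and identifies chunks with four ASCII characters:

```java
import java.io.ByteArrayOutputStream;
import java.nio.ByteBuffer;
import java.nio.ByteOrder;

public class AviHeaderSketch {

    // A FourCC is a chunk identifier: exactly four ASCII characters.
    static void writeFourCC(ByteArrayOutputStream out, String fourcc) {
        for (char c : fourcc.toCharArray()) out.write(c);
    }

    // RIFF stores all multi-byte integers little-endian.
    static void writeIntLE(ByteArrayOutputStream out, int value) {
        ByteBuffer b = ByteBuffer.allocate(4).order(ByteOrder.LITTLE_ENDIAN);
        b.putInt(value);
        out.write(b.array(), 0, 4);
    }

    // The 12-byte header that opens every AVI file:
    // "RIFF" + (file size minus these first 8 bytes) + the form type "AVI ".
    public static byte[] riffHeader(int fileSizeMinus8) {
        ByteArrayOutputStream out = new ByteArrayOutputStream();
        writeFourCC(out, "RIFF");
        writeIntLE(out, fileSizeMinus8);
        writeFourCC(out, "AVI "); // note the trailing space in the form type
        return out.toByteArray();
    }
}
```

The nested `hdrl` and `movi` lists that follow are written the same way: a FourCC, a little-endian size, then the chunk’s payload. Getting those sizes and stream-format fields exactly right was where the real-world files diverged from our reading of the documentation.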
I say simple. Obviously, it took a great deal of effort and a dedicated team to have both versions up and running for the launch night of this year’s Children in Need. But, with a little creativity and a lot of coding, hopefully we’ve helped bring Pudsey’s message to a wider audience. And generated a few in-app purchases for a very good cause.