Revisiting Old Code
Having started casually looking for a new job, it dawned on me that it would be a good idea to update my CV and LinkedIn profile with some links to my GitHub. After all, when I review CVs from candidates, if any of them include links to publicly accessible source code I'll always take a look. Reviewing the code of others is a great way to learn how they approach problems and whether they practice methodologies like TDD. Before I started circulating my CV, though, I thought it would be wise to spend some time pruning my code and pinning my more complete and interesting projects to my homepage.
In thinking about my interesting projects, I was reminded of some code I wrote quite a few years ago: a practical demonstration of using video frames as a source when drawing to a Canvas element. I first came across this feature when HTML5 was still a draft specification, though Chrome already supported it. The most interesting part of the feature was the ability to slice up video frames and apply 3D transforms. I came across a demo page that played a video, and when the user clicked it the video would explode into around 50 separate pieces and then slowly pull itself back together. It stuck in my mind, and about a year later I was called into a meeting at work to discuss how on earth the company I worked for at the time was going to build support for a complicated customer feature it had committed to.
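The trick, for anyone who hasn't used it, is simply that the canvas 2D drawImage() call accepts a video element as its source, so you repaint the current frame on every animation frame. A minimal sketch of the idea (the element IDs are illustrative, not from the original demo):

```typescript
// Minimal sketch: paint the current video frame onto a canvas on
// every animation frame. Element IDs are illustrative.
const video = document.getElementById("source") as HTMLVideoElement;
const canvas = document.getElementById("target") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

function paint(): void {
  // drawImage accepts a playing <video> as its source; the current
  // decoded frame is copied into the canvas.
  ctx.drawImage(video, 0, 0, canvas.width, canvas.height);
  requestAnimationFrame(paint);
}
requestAnimationFrame(paint);
```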
The customer required a now-and-next preview bar with live previews of up to 5 channels, and the user could use the up and down arrows to flip the bar to view the previous/next 5 channels. The devices we developed at the time could, at best, decode and display two video streams, so the second stream would be a ‘mosaic’ video generated server-side containing 16 video thumbnails. Individual thumbnails would therefore need to be sliced from the mosaic, displayed on screen in the right location, and then have various transforms applied. Once the requirement was presented, various engineers in the room scratched their heads. We had no support for any form of 3D rendering in the software stack, nor any existing APIs to access decoded video frames. But I recalled the exploding video demo I'd seen, and our browser ports already supported the video element and canvas; if we could just get the browser access to the decoded frames, we might be able to pull this off. I made my pitch, showed the exploding example to the room, and asked for a few hours to knock up a proof of concept in a desktop browser to prove my point. By the next day our course was set.
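The slicing itself maps directly onto the nine-argument form of drawImage(), which copies a source rectangle from the video into a destination rectangle on the canvas. A rough sketch of the idea (the 4×4 packing of the 16 thumbnails is my assumption, and the helper is hypothetical; the real mosaic geometry would have come from the server):

```typescript
// Sketch: copy one thumbnail out of a 4x4 mosaic video onto the
// canvas. The grid layout is an assumption for illustration.
const COLS = 4;
const ROWS = 4;

function drawThumbnail(
  ctx: CanvasRenderingContext2D,
  mosaic: HTMLVideoElement,
  index: number,          // 0..15, which thumbnail to slice out
  dx: number, dy: number, // where to place it on the canvas
  dw: number, dh: number  // how large to draw it
): void {
  const sw = mosaic.videoWidth / COLS;
  const sh = mosaic.videoHeight / ROWS;
  const sx = (index % COLS) * sw;
  const sy = Math.floor(index / COLS) * sh;
  // Nine-argument drawImage: source rect -> destination rect.
  ctx.drawImage(mosaic, sx, sy, sw, sh, dx, dy, dw, dh);
}
```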
And that’s what led me here: I recalled that I still had the code for that proof of concept lying around and wondered if it still worked. Despite its overall simplicity, I remember it looked fairly decent, and it would make a nice addition to my portfolio if I could deploy it to a public location, e.g. GitHub Pages or similar. I found the code in an old backup, and on firing it up it still sort of runs. It seems behaviour changes in browsers now mean that user interaction is required before video playback will start, and the entire thing was hardcoded for a screen resolution of 720p.
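The autoplay change is the well-known one: browsers now refuse to start un-muted playback until the page has received a user gesture, and play() returns a promise that rejects when blocked. A minimal sketch of the two fixes I have in mind, assuming the demo has a single video and canvas element:

```typescript
// Sketch of the two fixes: gate playback behind a user gesture, and
// size the canvas from the viewport rather than a hardcoded 1280x720.
const video = document.querySelector("video")!;
const canvas = document.querySelector("canvas")!;

function resize(): void {
  canvas.width = window.innerWidth;
  canvas.height = window.innerHeight;
}
window.addEventListener("resize", resize);
resize();

// play() returns a promise that rejects while autoplay is blocked,
// so retry on the first click if the initial attempt fails.
video.play().catch(() => {
  document.addEventListener("click", () => void video.play(), { once: true });
});
```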
Hence, my project for the next week is to modernise this old code, get the demo back to a place where it works as intended, and post it publicly, alongside open-sourcing the code on GitHub for anybody who might find it interesting.
Work is happening here.