Last week @chadwallacehart shared some JavaScript code that used the HTML5 drawImage() method to copy a <video> source into multiple canvas contexts, which could then be manipulated individually. With that technique, the external video source can be sliced into four quadrants for display. The code was great, so I decided to integrate the technique into our very basic simpleDemo, which I use a lot as a client-side WebRTC frontend into PowerMedia XMS. I also wanted to expand from the original four quadrants to nine, increase the received resolution to 720p, and recreate a Google Hangouts-type layout. A link to download the entire project is below, but here I also step through most of the JavaScript code:

First, we’ll need an event listener that waits for the video source to start playing:

video.addEventListener('playing', function () { ... });

The JavaScript variable video that the listener is attached to is tied, via jQuery, to the HTML video element (remoteVideo) receiving the WebRTC input feed:

var video = $('#remoteVideo')[0];
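To be explicit about how these first two snippets fit together, the overall structure looks roughly like this (a sketch of the shape of the code, not the exact project source):

var video = $('#remoteVideo')[0];               // the <video> element receiving the WebRTC feed
video.addEventListener('playing', function () {
    // the canvas setup and drawing code below runs once frames start flowing
});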

Next, we’ll need to create an array that stores each of the canvases. In this snippet, we have nine designated canvases into which we’ll split the original video source. Note: c1 through c9 are the <canvas> element ids as designated in the HTML code.

var splitCanvas = [$('#c1')[0], $('#c2')[0], $('#c3')[0], $('#c4')[0], $('#c5')[0], $('#c6')[0], $('#c7')[0], $('#c8')[0], $('#c9')[0]];
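As an aside, since the ids are sequential, the same array could also be built with a quick loop instead of being listed out (an equivalent alternative, not the snippet from the project):

var splitCanvas = [];
for (var i = 1; i <= 9; i++) {
    splitCanvas.push($('#c' + i)[0]);   // grab each <canvas> element by id
}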

To make life easier, we’ll create two variables, w and h, that store the width and height of each of the nine quadrants. Note: the incoming video source doesn’t divide evenly into thirds, hence the slightly adjusted divisors.

var w = video.videoWidth / 3.1;

var h = video.videoHeight / 3.05;
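For example, with a 1280x720 (720p) source, that works out to roughly 413 x 236 pixels per quadrant.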

Next, we’ll loop through the canvases, size each one to a single quadrant (w x h), and save its 2D drawing context into an array so we can manipulate the images later:

var context = [];
for (var x = 0; x < splitCanvas.length; x++) {
    splitCanvas[x].width = w;
    splitCanvas[x].height = h;
    context.push(splitCanvas[x].getContext('2d'));
}

The last step is to draw the nine quadrants from the video source every 33 ms (~30 FPS) using the setInterval and drawImage methods. Note: the interval rate should be adjusted to match the incoming frame rate (this saves CPU if the incoming rate is lower and improves quality if it is higher). In my testing, the PowerMedia XMS media being negotiated is set to 30 FPS. For this snippet, I’m only including the top-left quadrant; all of the other quadrants can be referenced in the source code, and a generalized version is sketched after the snippet below:

setInterval(function () {
    context[0].drawImage(video,
        0, 0,     // sx, sy: top-left corner of the source clipping region
        w, h,     // sWidth, sHeight: size of the clipping region
        0, 0,     // dx, dy: placement of the image in the canvas
        w, h);    // dWidth, dHeight: size of the image drawn into the canvas
}, 33);
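For reference, here is one way the same drawImage() call could be generalized to cover all nine quadrants in a single loop; this is a sketch of the idea rather than the exact code in the downloadable project:

setInterval(function () {
    for (var i = 0; i < context.length; i++) {
        var sx = (i % 3) * w;            // x offset of this quadrant's column (0, w, or 2w)
        var sy = Math.floor(i / 3) * h;  // y offset of this quadrant's row (0, h, or 2h)
        context[i].drawImage(video,
            sx, sy,   // top-left corner of this quadrant in the source video
            w, h,     // size of the clipping region
            0, 0,     // draw at the canvas origin
            w, h);    // at the same size as the clipping region
    }
}, 33);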

The result is optimized performance: PowerMedia XMS mixes all parties into a single, synchronized downstream to limit client bandwidth and processing requirements, while the HTML5/JS canvases decompose that single multi-party stream into individual attendees on the client side, giving the UX developer complete control over the layout.

~Vince

@vfpuglia