Screencast a website with Socket.IO and Node.js

I am trying to create a screencast of a website that requires no software other than a browser. It does not have to be a true screencast: it might be enough to "rebuild" the website on the viewer's side from information such as the browser, the viewport resolution, the scroll position, and so on. It is only meant for a guided tour explaining a website and how it functions.

My current solution: the script takes "screenshots" of the website with html2canvas (http://html2canvas.hertzen.com/). I then transmit each screenshot as base64-encoded PNG data to the receivers, who decode it and draw it onto their pages.
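
For reference, a minimal sketch of that pipeline, assuming the promise-based html2canvas API and an established Socket.IO connection; the 'frame' event name and the #viewer canvas are placeholders:

// Sender: render the page to a canvas and broadcast it as a PNG data URL.
var socket = io();

function captureAndSend() {
    // assumes the promise-based html2canvas API
    html2canvas(document.body).then(function (canvas) {
        socket.emit('frame', canvas.toDataURL('image/png')); // base64-encoded PNG
    });
}

// Receiver: decode the data URL and draw it onto a local canvas (#viewer).
socket.on('frame', function (dataUrl) {
    var img = new Image();
    img.onload = function () {
        document.getElementById('viewer').getContext('2d').drawImage(img, 0, 0);
    };
    img.src = dataUrl;
});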

But html2canvas needs about 1 second to generate a canvas for a text-only website, and about 5-10 seconds for websites with images. That is too long.

Do you have ideas for other approaches?

Prosit answered 4/10, 2015 at 17:53 Comment(8)
Search for WebRTC, it's made for that. – Shipman
@Shipman WebRTC is for communication, right? Communication is not my problem, I think; it's more the screen capturing... – Prosit
w3.org/TR/screen-capture, but until it's supported in browsers you probably need a different solution. Maybe apply all styles of the page inline (through getComputedStyle) and transfer the HTML. – Shipman
Is the website accessible to the viewer as well? You could just use an iframe and update the location then. – Shipman
Hmm, what about using PhantomJS + Browserify? At least they have a screen-capture API (phantomjs.org/screen-capture.html). Or github.com/nwjs/nw.js. You could create some sort of "entry page" where people type a URL and PhantomJS renders and captures everything, then sends it over Socket.IO to the clients. – Lion
@Prosit Found this, which creates a solution with WebRTC: developers.google.com/web/updates/2012/12/… – Lion
@FerTo That is exactly what I am looking for, but they use a Chrome extension to realize it. PhantomJS + Browserify I still have to test for performance. – Prosit
Would be interesting to see your solution at the end. Any ideas about creating some sort of open-source project? :) I'm already interested ^^ – Lion

Have you thought about capturing events on the page and replaying them on the other side? (Perhaps with a transparent overlay to block user interactions on the viewer's side.)

Once the recorder sends the screen size etc., an iframe can be used to display the same webpage on the other side. Then add an event handler to the document and listen for common events like clicks, keypresses, etc.

// Listen in the capture phase so events are seen even if the page stops propagation.
['click', 'change', 'keypress', 'select', 'submit', 'mousedown'].forEach(function (event_name) {
    document.documentElement.addEventListener(event_name, function (e) {
        // send the event to the other side using Socket.IO or web sockets;
        // getSelector() builds a CSS selector for the target element
        // (see the recorder.js link below for one implementation)
        console.log(getSelector(e.target), e.type);
    }, true);
});

On the playback side, you can just look up the element by its selector and fire the event. Finding the CSS selector for an element can be a bit tricky, but the code below is a good start:

https://github.com/ebrehault/resurrectio/blob/master/recorder.js#L367
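
To illustrate, here is a minimal sketch of the playback side, assuming each recorded event arrives over Socket.IO as a { selector, type } object matching the recorder above; the 'ui-event' name and the #viewer-iframe element are placeholders:

// Playback: find the element by its recorded selector and re-fire the event.
var socket = io();
var frame = document.getElementById('viewer-iframe'); // same-origin iframe showing the page

socket.on('ui-event', function (msg) {
    var target = frame.contentDocument.querySelector(msg.selector);
    if (!target) return; // element may not have rendered yet on the player side

    if (msg.type === 'click') {
        target.click();
    } else {
        // generic fallback: dispatch a synthetic event of the recorded type
        target.dispatchEvent(new Event(msg.type, { bubbles: true }));
    }
});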

Saucier answered 7/10, 2015 at 6:8 Comment(3)
What do you think about the problem that different browsers could render the same HTML code differently? – Prosit
Most new browsers render HTML the same way; recording and re-firing events using elements found by selectors will guarantee the correct event is fired on the correct element even if there are minor differences. Something to be mindful of is internet speeds and render times, e.g. an element might take longer to load on the player side and might not be available at the time you replay the event, so prepending events with waitForElementToBeVisible() and waitForElementToStopMoving() functions might be a good idea. Really cool idea, will be interesting to see how it all works out. – Saucier
Also check out this project where I'm capturing events to be played back via Selenium using a Chrome plugin: github.com/chris-gunawardena/test-rec – Saucier

What you could consider is capturing the user input events on one end, then simulating them on the other end. This can be done live: turn the mouse and key events into a stream, then send it to the client's simulator. See this article: https://gist.github.com/staltz/868e7e9bc2a7b8c1f754

You can also capture the stream with timestamps and send it to a data store; this essentially creates an array-like log that gives you one item after the other as a time series. You can then feed this log into a reactive library like RxJS and have the scheduled events play out on the client.

For simulation, there should be a few libraries out there (I imagine jQuery can also work), e.g. http://yuilibrary.com/yui/docs/event/simulate.html
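
As a rough sketch of that idea (timestamped capture plus scheduled playback), using plain timers and native synthetic events rather than RxJS or a simulation library; the event names and coordinate handling are assumptions:

// Capture: build a timestamped log of input events.
var log = [];
var start = Date.now();
['mousedown', 'mousemove', 'keydown'].forEach(function (name) {
    document.addEventListener(name, function (e) {
        log.push({ time: Date.now() - start, type: e.type, x: e.clientX, y: e.clientY, key: e.key });
    }, true);
});

// Replay: schedule each logged entry and simulate it with a synthetic DOM event.
function replay(log) {
    log.forEach(function (entry) {
        setTimeout(function () {
            var target = document.elementFromPoint(entry.x || 0, entry.y || 0) || document.body;
            var EventCtor = entry.type.indexOf('key') === 0 ? KeyboardEvent : MouseEvent;
            target.dispatchEvent(new EventCtor(entry.type, {
                bubbles: true, clientX: entry.x, clientY: entry.y, key: entry.key
            }));
        }, entry.time);
    });
}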

Discrimination answered 8/10, 2015 at 5:21 Comment(0)
