@vpalmisano/webrtcperf-js

    WebRTC Perf JavaScript browser library

    GitHub page | Documentation

    A browser library used by the webrtcperf tool to capture the RTC logs and run page inspection/automation. It can also be used stand-alone by importing the JavaScript package into the page before loading the other JavaScript sources. It provides utilities to debug the page RTC connections and the getUserMedia and getDisplayMedia usage, to evaluate the end-to-end delay, and more.

    <head>
    <script type="text/javascript" src="https://unpkg.com/@vpalmisano/webrtcperf-js/dist/webrtcperf.js"></script>
    </head>

    The library can be used directly in a regular Google Chrome browser session:

    1. Install Tampermonkey
    2. Install the User Script (a minimal loader sketch is shown below)
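    For reference, a minimal Tampermonkey user script that loads the library from unpkg could look like the sketch below; this is only an illustration under the assumption that the published User Script does something similar:

    // ==UserScript==
    // @name         webrtcperf-js loader (sketch)
    // @match        *://*/*
    // @run-at       document-start
    // @require      https://unpkg.com/@vpalmisano/webrtcperf-js/dist/webrtcperf.js
    // @grant        none
    // ==/UserScript==
    // The required bundle exposes the global webrtcperf object used in the
    // snippets below; @run-at document-start lets it install its overrides
    // before the page scripts run.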
    // Inspect the RTC peer connections tracked by the library:
    webrtcperf.PeerConnections

    // Collect the peer connection stats:
    await webrtcperf.collectPeerConnectionStats(true)

    // Use a fake media file for the getUserMedia audio/video tracks:
    webrtcperf.config.MEDIA_URL = 'https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4'
    // Start audio or video with getUserMedia

    // Fake screen sharing:
    await webrtcperf.startFakeScreenshare()
    webrtcperf.overrides.getDisplayMedia = constraints => Object.assign(constraints, { preferCurrentTab: true })
    // Start a screen share with getDisplayMedia
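    As a usage sketch (assuming the page, or the DevTools console, triggers the capture and that the library applies the override above to the constraints of subsequent calls), the screen share can then be started with the standard API:

    // Illustrative only: a regular getDisplayMedia() call made after the
    // configuration above; browsers typically require a user gesture.
    const stream = await navigator.mediaDevices.getDisplayMedia({ video: true })
    console.log('screenshare track:', stream.getVideoTracks()[0].label)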

    Visit https://webrtc.github.io/samples/src/content/peerconnection/pc1/ with the Tampermonkey script activated to measure the end-to-end media delay:

    webrtcperf.config.MEDIA_URL = 'https://commondatastorage.googleapis.com/gtv-videos-bucket/sample/BigBuckBunny.mp4'
    webrtcperf.params.timestampWatermarkVideo = true
    webrtcperf.params.timestampWatermarkAudio = true
    // Output:
    // [webrtcperf-0] [e2e-video-stats] rx text=0-1743947998586 delay=44ms confidence=74 elapsed=73ms
    // [webrtcperf-0] [e2e-audio-stats] rx delay: 71.33333333333326ms rxFrames: 62 rxFramesDuration: 1322.6666666666667ms

    // Access the collected stats:
    webrtcperf.videoEndToEndDelayStats
    webrtcperf.audioEndToEndDelayStats

    The library adds a video timestamp overlay on the sender side and recognizes it on the receiver side using Tesseract.js. The sender-side audio track is mixed with signals generated with the ggwave library; on the receiver side, the same library is used to decode the received signals carrying the encoded sender timestamp. Sender and receiver machine clocks must be synchronized.
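
    Conceptually, the receiver recovers the sender timestamp (a Unix epoch in milliseconds, as in the 0-1743947998586 watermark shown above) and compares it with its own clock. A minimal sketch of that computation, with illustrative variable names and a watermark format inferred from the example output:

    // Sketch only: decodedText stands for the text recovered by Tesseract.js
    // (video) or ggwave (audio); the '<index>-<senderTimestampMs>' format is
    // inferred from the example output above.
    const decodedText = '0-1743947998586'
    const senderTimestampMs = Number(decodedText.split('-')[1])
    const endToEndDelayMs = Date.now() - senderTimestampMs
    // Only meaningful if sender and receiver clocks are synchronized (e.g. via NTP).
    console.log(`e2e delay: ${endToEndDelayMs}ms`)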