Core Us

last modified August 27, 2016 by strypey

Core Us: Developing a Free Code Platform for Interactive Conferences


Existing Code That Could Be Useful:



GNU Telephony (GPL 2 or later): A suite of telephony software supported by the GNU Project

Mumble/Murmur (New BSD): an 'IRC with voice' style system, developed for in-game chat

SIPML5 (BSD) and WebRTC2SIP: a SIP client written in JavaScript, and a WebRTC back-end, together allowing the browser to act as a phone.

YATE (GPL v2): Internet telephony engine designed to be easily extensible.


HasciiCam (C, GPL?): uses the AA-Lib library to convert video from a camera (e.g. a webcam) into ASCII text, which can be transferred across a network and displayed with much lower resource use than a full-data video stream.

Jitsi VideoBridge (LGPL, switching to Apache license): video relaying server for multiparty conferences.


BigBlueButton (LGPL): voice/video web conferencing server developed for distance education. Currently depends on Flash, but has a plan for full transition to HTML5 and WebRTC.

DruCall: a module to integrate WebRTC into Drupal sites, using a specialised SIP server as the back-end.

Eventstreamr (GNU AGPL): "single and multi room audio visual stream management"

Jitsi Meet (MIT, switching to Apache license): WebRTC client for text, voice and video conferencing, encrypted by default, with desktop and presentation sharing.

OpenCast (Educational Community License 2.0, a variant of Apache 2.0): a Java/HTML system developed by the Apereo Foundation for recording and streaming university lectures.

Palava.tv: Uses the Palava-machine (Ruby) and Palava-client (CoffeeScript) listed on the FSF Free Software Directory.

RetroShare (components under a range of libre licenses): A P2P communications client which can support both voice and video chat

Ring: cross-platform (desktop and mobile), P2P text, voice, and video chat client, now in beta

RocketChat (MIT): a web-based video conferencing server billed as a "Slack-like online chat, built with Meteor", that appears to support WebRTC.

Signal (client apps: GPLv3; server: AGPLv3): developed by Open Whisper Systems, Signal consists of apps for Android and iOS and a routing server, end-to-end encrypted by default.

Subrosa (GPLv3, discontinued): WebRTC client/server for text, voice and video conferencing, encrypted by default.

Talky.io: A WebRTC platform which has released a number of elements of their stack as free code including SimpleWebRTC and OTalk (although there are still some proprietary bits in the stack).

WebRTC: An open standard developed by Google, Mozilla, and Opera for realtime voice and video conversation, which is being standardized by the W3C. Some documentation on how to implement can be found on WebRTC Quick Start.
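One point the WebRTC Quick Start documentation makes is that the standard covers media transport but deliberately leaves signalling to the application: before any media flows, the caller must send an SDP "offer" and the callee must return an SDP "answer" (plus ICE candidates) over some channel of the application's choosing, typically a small relay server reached over WebSockets. The following toy Python simulation sketches just that exchange pattern; the `Peer` class, message shapes, and SDP snippets are invented for illustration and are not part of any WebRTC library.

```python
from queue import Queue

# Toy simulation of WebRTC's offer/answer signalling pattern.
# In-memory queues stand in for the signalling channel (which, in a real
# deployment, might be a WebSocket relay). All names here are hypothetical.

class Peer:
    def __init__(self, name, inbox, outbox):
        self.name = name
        self.inbox = inbox    # messages arriving from the other peer
        self.outbox = outbox  # messages relayed to the other peer

    def send(self, kind, body):
        self.outbox.put({"from": self.name, "kind": kind, "body": body})

    def receive(self):
        return self.inbox.get()

a_to_b, b_to_a = Queue(), Queue()
caller = Peer("alice", inbox=b_to_a, outbox=a_to_b)
callee = Peer("bob", inbox=a_to_b, outbox=b_to_a)

# 1. The caller creates and sends an offer (real WebRTC: createOffer()).
caller.send("offer", "v=0 ... m=audio ... m=video ...")

# 2. The callee reads the offer and replies with an answer (createAnswer()).
offer = callee.receive()
assert offer["kind"] == "offer"
callee.send("answer", "v=0 ... a=recvonly ...")

# 3. Both sides would then trickle ICE candidates over the same channel.
answer = caller.receive()
print(answer["kind"])  # -> answer
```

The point of the sketch is that any conferencing platform built on WebRTC still has to supply this relay itself, which is where server projects like Jitsi Meet and SIPML5's back-end come in.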

Live mixers

dvswitch (GNU GPL): an audio-visual mixing system that depends on LibAV

gst-switch (GNU GPL): written in C and uses GStreamer

snowmix (GNU GPLv3): a live video mixer

Voctomix (Expat or "MIT"): a live video mixer wrapped around GStreamer (GNU LGPL), developed by the C3VOC in response to their frustrations with dvswitch, snowmix and gst-switch


Protocols

Jingle (open protocol extending XMPP): designed for one-on-one voice chat

Muji (open protocol extending Jingle): allows for multi-user voice chat

SIP (Session Initiation Protocol): a server/client protocol mainly used for voice chat

Tox (C, GPLv3): encrypted, cross-platform, peer-to-peer, text, voice, and video chat, most client software still in alpha

XMPP (eXtensible Messaging and Presence Protocol): client/server protocol for Instant Messaging (short emails in realtime) based on Jabber


If we are going to stream, store and replay audio and video from conferences, we don't want to use patent-encumbered, proprietary formats like MP3 and H.264. Instead, we want to use patent-free/royalty-free multimedia file formats, developed by groups like Xiph.org.

Some existing options are:

Audio codecs:

  • Opus - developed by Xiph (replaces Speex)
  • Vorbis - developed by Xiph, lossy codec (equivalent to MP3)
  • FLAC - developed by Xiph, lossless codec


Video codecs:

  • Theora - developed by Xiph
  • Dirac - created by BBC Research, designed for everything from "low-resolution web content to broadcasting HD and beyond, to near-lossless studio editing"
  • VP8/VP9 - VP8 was bought by Google (along with On2 Technologies) and released as a royalty-free open codec; VP9 is its Google-developed successor


Container formats, which allow video, audio, subtitles and other components in different formats to be included in one file:

  • Ogg - developed by Xiph
  • Matroska - developed by the Matroska organisation in France
  • WebM - developed by Google, uses a fork of the Matroska container, VP8 (or 9) video, Vorbis or Opus audio, and WebVTT for text

Peer-to-Peer Network:

Whatever end-user applications and codecs are used, it seems reasonable that any conferencing system will be more scalable if it uses some kind of P2P network, where everyone who joins the conference contributes some of their own computing resources to the networking effort. This is how Skype originally worked, before Microsoft acquired it and moved it to a purely client/server model. Some existing open P2P network protocols that could be used:
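The scalability argument can be made concrete with some back-of-envelope upload-bandwidth arithmetic. The figures below are illustrative, not measurements of any system listed here, and the function names are invented:

```python
# Rough upstream-bandwidth arithmetic for an n-way video conference,
# with every stream at `rate` Mbps. Purely illustrative numbers.

def central_server_load(n, rate=1.0):
    """A central forwarding server (the model behind relays like Jitsi
    Videobridge) must send each participant the n-1 other streams,
    i.e. n * (n - 1) outbound streams in total."""
    return n * (n - 1) * rate

def full_mesh_peer_load(n, rate=1.0):
    """In a full-mesh P2P call, each peer uploads its own stream
    directly to the n-1 others; the server vanishes, but each peer
    pays this cost."""
    return (n - 1) * rate

for n in (4, 10, 50):
    print(n, central_server_load(n), full_mesh_peer_load(n))
# At n=50 the server model needs 2450 Mbps of outbound capacity, while
# each mesh peer needs 49 Mbps.
```

A full mesh only moves the cost onto the peers, and per-peer upload still grows linearly with group size; that is why swarm-style stream splitting (as in BitTorrent Live) or hybrid relaying is attractive for larger conferences.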

Freenet (GPL): uses standard IP networking in an attempt to create an uncensorable alternative to the world wide web.

GNUnet (GPL): "GNU's Framework for Secure Peer-to-Peer Networking".

BitTorrent Live: A system for streaming live events using the same P2P swarming principles underlying BitTorrent downloads.

RetroShare (GPL): "using a web-of-trust to authenticate peers and OpenSSL to encrypt all communication... provides filesharing, chat, messages, forums and channels".

Other Resources:

Chatting, Audio and Video Calls: A document by RiseUp Labs, the folks behind We.Riseup.net

74 free code telephony projects from 2007


The goal: to assemble user-friendly software systems for replacing in-person conferences with online conferences, to reduce the resource costs of conferencing, and to increase opportunities for participation.

Initial Targets

1. Voice/Video Conference calls (for which people currently use things like Skype)
2. Live streaming of presentations to remote audiences (audio and/or video)
3. Recording presentations (audio and/or video)
4. Streaming presentations after-the-fact (audio and/or video)

What Have Tech Conferences Used Recently?

At least the last three of these functions seem to have been achieved using an entirely free code stack when Edward Snowden, speaking from Russia, gave a keynote address at the 2016 LibrePlanet. I tried to find the documentation that apparently exists of what they used, but all I could find as of 26/03/2016 was this LibrePlanet wiki page on Skype replacements. Depressingly, they've already gathered a lot of the same information I have, as have Wikipedia and others, and nobody seems any closer to solving the problem.

Thanks to Zak of the FSF, who emailed me the links to the streaming documentation for LibrePlanet 2016, and for 2015, which he says has more detailed information. The tools used differ greatly: the 2015 page features IceCast 2 (GNU GPLv2), GStreamer (GNU LGPLv2), and the Jack Audio Server (GNU GPLv2 or later), while the 2016 page highlights OpenBroadcaster 2 (the "obs-studio" cross-platform rewrite, most components appear to be under GNU GPLv2 or later) and Jitsi Meet (Apache 2.0). There's also ABYSS (GNU GPLv3 or later), which was developed by David Testé for LibrePlanet under the name libre-streamer. According to the FSD entry it was used at both the 2015 and 2016 conferences, but no further development seems to have been done since March 2016.

There was also a talk on conference recording by Joel Addison at Linux.conf.au 2016, which covers goal 3 and to some degree goal 4 (upload to YouTube and file mirrors), and discusses Eventstreamr (GNU AGPL), MoviePy, Zookeeper (a conference management system?), and Voctomix (a C3VOC project). The most exciting tip in Addison's talk is the TimVideos.us meta-project, which has the same initial targets as this project, but actually has some free code and free hardware design projects in progress and in use. Addison also mentions dvswitch (GNU GPL), which has been used at DebConf, but it depends on a library called LibAV, which Addison says is poorly supported. The dvswitch system is also discussed in a 2013 talk by Mark Ellem, which also mentions OpenCast.


Since the 90s and the early days of Indymedia and counter-summits held as part of the activist response to corporate globalization, I've had a vision of using internet technologies to enable people from around the planet to attend globally important conferences, both as audiences for talks, and as active participants in workshops and discussions.

This vision was partly motivated by the limitations of in-person participation - only so many people can fit in a room to hear a talk by a knowledgeable and inspiring speaker. Also, as a resident of a Pacific island, I am very aware of the high financial costs of attending conferences in person, which limits participation for those who live outside of the main hubs of international travel, especially in less privileged countries. In-person conferences are therefore biased towards the interests and priorities of those with privileged access to money.

While it's true that computers and internet access cost money, international flights cost far more. Spending money on a computer is a one-off expense which creates a re-usable resource, available for a wide range of other purposes, including future conferences. To attend in person, flights must be purchased again every time. Even one computer in a community, connected to any old amplifier and speakers, allows the whole community to listen to conference talks, and one microphone potentially allows anyone who hears a talk to ask a question. When money is spent on flights, somebody gets to participate, and everyone else doesn't.

Most importantly though, my vision aimed to address the resource costs and inefficiency of travelling long distances for short events. Both long haul and short haul flights currently make a massive contribution to fossil fuel use and carbon emissions, and making long trips by land/sea is time-consuming and exhausting, and still has significant resource costs. I truly believe that the planet cannot afford for us to have as many global conferences as we need to facilitate the transition to a regenerative future, if we carry on having them in person.

It may be impossible to fully replace the feeling of sitting in a room with a large group of fellow human beings, but most of us live in towns and cities, surrounded by other human beings. There are opportunities to gather in large groups with people who live a bit nearer to us.

Conference Specific Needs

While there is now a plethora of free code projects aimed at informal chat, whether one-on-one or multi-user, there are very few projects (if any) which address the more specific needs of conference events, such as:

  • support for a speaking order, to avoid the never-ending "no, you go" and long awkward pauses
  • facilitator(s) with special "moderator" privileges (as in an IRC channel)
  • timekeeping
  • note-taking, although some projects support integration with a note-taker like EtherPad, and sharing desktops and presentations
  • a variety of meeting formats: talks, Q&A sessions, workshops, facilitated discussions, brainstorming sessions, open discussions
  • a smooth user experience that maximizes opportunities for participation across multiple use situations, where some users may be gathered in person, possibly in large groups, while others are connecting entirely through the software
  • translation for multi-lingual gatherings: auto-subtitling as people speak, support for an inset window for a sign language interpreter, etc.
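The first two needs on this list (a speaking order and moderator privileges) are simple enough to model that a sketch may help show what is missing from chat-oriented tools. The class and method names below are invented for illustration; this is not taken from any of the projects listed above.

```python
from collections import deque

# Hypothetical model of a conference "speaking order": participants raise
# a hand, and only a facilitator with moderator rights may grant the floor,
# in first-come-first-served order. All names here are invented.

class SpeakingQueue:
    def __init__(self, moderators):
        self.moderators = set(moderators)
        self.hands = deque()   # raised hands, oldest first
        self.floor = None      # who currently has the floor

    def raise_hand(self, user):
        if user not in self.hands:
            self.hands.append(user)

    def give_floor(self, moderator):
        if moderator not in self.moderators:
            raise PermissionError("only a facilitator can give the floor")
        self.floor = self.hands.popleft() if self.hands else None
        return self.floor

q = SpeakingQueue(moderators={"chair"})
q.raise_hand("ana")
q.raise_hand("ben")
print(q.give_floor("chair"))  # -> ana
print(q.give_floor("chair"))  # -> ben
```

Even this toy version makes the gap visible: a real conferencing tool would additionally need to mute and unmute audio streams as the floor changes hands, which is exactly the kind of integration between chat plumbing and meeting facilitation that the projects above do not yet provide.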

Ambitious Big Picture Targets

1. Allow the user at home to 'mix' what they see, choosing between feeds from all available cameras and the computer displaying the slides, with preset options to do things like inset the speaker in the corner of the slide display.
2. Use open world 3D graphics platforms like WorldForge or Ryzom to simulate a conference environment, allowing normal conversation through automated use of Mumble channels to keep users who are near each other in the same conversational 'room'.
3. Use webcams to map a photorealistic or stylized face of the user onto their avatar for close conversations.
4. Full 3D immersion, with free code virtual reality engines (wrap-around 3D graphics and stereo sound), allowing full simulation of being at the conference venue, with instant teleporting between areas, or the use of portal doorways, where each room has one doorway which can connect to any other door in the venue.
5. Use only hardware based on free hardware designs, such as the Numato Opsis and Elphel cameras.