Developer Documentation

Low Level API

The ODIN Unity SDK provides a low-level API that allows you to create your own voice-chat experience from scratch.

Tip

This manual covers the low-level API and the classes that are provided with the SDK. If you are looking for the easy-to-use prefabs and components that ship with the SDK, please refer to the High Level API.

Getting Started

The low-level API is designed to give you full control over the voice-chat experience in your game. It is a set of classes that you can use to build your own voice-chat experience from the ground up. Everything starts with the Room class.

Please see the SampleBasic sample that is provided with the SDK to see how to use the low-level API. Please note: the code below only shows the most important parts of the sample and skips error handling and validation for the sake of simplicity. Please refer to the sample code for a complete example.

Creating a Room

The Room class is the main class that you will use to create a voice-chat experience in your game. You first create a local Room object and then join it. The server creates the room on the fly, so you don’t need to do any server-side bookkeeping yourself.

// Create a room instance and provide a gateway or server URL, a sample rate and a flag if the room is stereo or not
_Room = Room.Create(Gateway, 48000, false);

Handling Events

The next step is to handle the events fired by the room. The room fires events for all important actions that happen in it. You can subscribe to these events and handle them in your game.

_Room.OnRoomJoined += Example_OnRoomJoined;
_Room.OnMediaStarted += Example_OnMediaStarted;
_Room.OnMediaStopped += Example_OnMediaStopped;
_Room.OnMessageReceived += Example_OnMessageLog;

Warning

The events are called on a different thread than the main thread! You need to make sure that you handle the events in a thread-safe way and that any call into the Unity API is dispatched to the main thread.

If you need to call the Unity API, you must dispatch the call to the main thread. One simple approach is to create a ConcurrentQueue instance in your class and drain it in the Update method of your MonoBehaviour. There are many examples of how to handle these situations in the Unity community, but here is a quick one:

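// Requires: using System; using System.Collections.Concurrent;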
public static readonly ConcurrentQueue<Action> UnityQueue = new ConcurrentQueue<Action>();

private void Example_OnRoomJoined(object sender, ulong ownPeerId, string name, byte[] roomUserData, ushort[] mediaIds, ReadOnlyCollection<PeerRpc> peers)
{
    // Handle the event
    // ...
    
    // If you need to call into the Unity API, you need to dispatch the call to the main thread
    UnityQueue.Enqueue(() =>
    {
        // Call into the Unity API
        // ...
    });   
}

private void Update()
{
    // Drain all queued actions on the main thread
    while (UnityQueue.TryDequeue(out var action))
        action?.Invoke();
}

Joining a Room

Once you have created the room and subscribed to the events, you can join it. All you need is a token.

To join a room, simply set the Room.Token property. As soon as a token is assigned, the room joins automatically.

Warning

Tokens are typically generated by your backend and are used to authenticate the user and to provide them with the necessary information to join the room. You should never generate tokens in your frontend or client application, as this would expose your secret keys to the public. See Access Keys in ODIN for more information on that topic. We have an example of how to generate tokens using a simple NodeJS script in a cloud function: Generating Tokens.

If you want to test ODIN without setting up your own backend, you can use this code to create a token for testing purposes. Please note: the function OdinRoom.GenerateTestToken is only available in the Unity Editor and should not be used in a production environment.

  var accessKey = "__YOUR_ACCESS_KEY__";
  var roomName = "MyRoom";
  var username = "Username";
  _Room.Token = OdinRoom.GenerateTestToken(roomName, username, 5, accessKey);

You can create an access key for testing purposes for up to 25 users directly in the web component on this page.

Adding Media

Once you have joined the room, you can add media to it, i.e. the user’s microphone stream. If you want users to be connected to each other immediately, you can add the media right after joining the room.

    private void Example_OnRoomJoined(object sender, ulong ownPeerId, string name, byte[] roomUserData, ushort[] mediaIds, ReadOnlyCollection<PeerRpc> peers)
    {        
        // each room has a limited number of ids reserved for encoders, i.e. audio input/capture
        if (_Room.AvailableEncoderIds.TryDequeue(out ushort mediaId))
            CreateCapture(mediaId);
        else
            Debug.LogError("Cannot create an encoder without a free encoder id");
    }
    
    // Create an encoder for the given media id and start sending audio
    public void CreateCapture(ushort mediaId)
    {
        if( _Room == null) return;

        // create an encoder to send audio data to
        if (_Room.GetOrCreateEncoder(mediaId, out MediaEncoder encoder))
        {
            // created encoders have to be started with a customizable rpc call
            _Room.StartMedia(encoder);
            // set a callback for the MicrophoneReader and add sample effects to the pipeline
            LinkEncoderToMicrophone(encoder);
        }
    }
    
    // Find the microphone reader in the scene and link it to the input of the encoder
    private void LinkEncoderToMicrophone(MediaEncoder encoder)
    {
        UnityQueue.Enqueue(() =>
        {
            // this sample does not set a persistent listener directly or via prefab
            if (_AudioInput && _AudioInput.OnAudioData?.GetPersistentEventCount() <= 0 && _Room != null)
                _AudioInput.OnAudioData.AddListener(ProxyAudio);

            OdinMicrophoneReader microphone = GetComponent<OdinMicrophoneReader>();
            if (microphone != null)
            {
                // optionally add effects to the encoder (Input/Capture)

                // add voice activity detection
                OdinVadComponent vadComponent = microphone.gameObject.AddComponent<OdinVadComponent>();
                vadComponent.Media = encoder;
                // add a microphone boost
                OdinVolumeBoostComponent volumeBoostComponent = microphone.gameObject.AddComponent<OdinVolumeBoostComponent>();
                volumeBoostComponent.Media = encoder;
            }
        });
    }

You’ll need to have an OdinMicrophoneReader component attached to your scripted game object for this to work.
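
If you want to make sure the component is present, you can add it at runtime, for example in Awake. This is just a convenience sketch; in most projects you would simply add the component to the prefab in the Inspector:

    private void Awake()
    {
        // Ensure an OdinMicrophoneReader is attached to this game object;
        // normally you would add it to the prefab in the Inspector instead
        if (GetComponent<OdinMicrophoneReader>() == null)
            gameObject.AddComponent<OdinMicrophoneReader>();
    }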

It is important to understand what happens here: imagine ODIN as an audio pipeline. The microphone has an output, and we need to connect that output to the input of the room. This code does exactly that: it links the output of the microphone to the input of the room. The room then sends the audio data to the server, and the server forwards it to all other peers in the room.

// Link OdinMicrophoneReader output (OnAudioData) to the input of the room (ProxyAudio)
_AudioInput.OnAudioData.AddListener(ProxyAudio);
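
The ProxyAudio method referenced above is the callback that forwards the microphone samples into the room’s encoder. The exact parameter list of OnAudioData and the method used to push samples into the encoder depend on the SDK version, so treat the following as a rough sketch and check the SampleBasic sample for the exact calls:

// Sketch only: the OnAudioData parameter list and the Push call are
// assumptions; refer to the SampleBasic sample for the actual API
private MediaEncoder _Encoder; // the encoder created in CreateCapture

private void ProxyAudio(float[] samples)
{
    if (_Room == null || _Encoder == null) return;

    // Hand the raw microphone samples to the encoder; the room encodes
    // them and sends the audio to the server for distribution
    _Encoder.Push(samples);
}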

Creating a Playback Stream

Now all users in the room can hear you, but you can’t hear them yet. To hear them, you need to create a playback stream for each media created by every user/peer in the room. You can do this by subscribing to the Room.OnMediaStarted event.

    private void Example_OnMediaStarted(object sender, ulong peerId, MediaRpc media)
    {       
        // in the default setup, the room automatically creates decoders internally on this event
        // get the decoder corresponding to the peer
        if (_Room.RemotePeers.TryGetValue(peerId, out PeerEntity peer))
            if (peer.Medias.TryGetValue(media.Id, out MediaDecoder decoder))
                CreatePlayback(decoder, peer);
    }
    
    public void CreatePlayback(MediaDecoder decoder, PeerEntity peer)
    {
        // EXAMPLE optionally add INTERNAL effects to the decoder (Output/Playback)
        MediaPipeline pipeline = decoder.GetPipeline();
        if (pipeline.AddVadEffect(out _))
            Debug.Log($"added {nameof(VadEffect)} to \"OdinDecoder {decoder.Id}\" of peer {peer.Id}");

        // Odin uses Unity to play the audio
        DispatchCreateAudioSource(decoder, peer);
    }
    
    /// <summary>
    /// Add OdinMedia that handles <see cref="AudioSource"/> and copy data from Odin to <see cref="AudioClip"/>
    /// </summary>
    /// <remarks>optionally <see cref="MediaDecoder.Pop"/> samples can be used with <see cref="AudioClip.SetData"/></remarks>
    private void DispatchCreateAudioSource(MediaDecoder decoder, PeerEntity peer)
    {
        UnityQueue.Enqueue(() =>
        {
            GameObject container = new GameObject($"OdinDecoder {decoder.Id}");
            container.transform.parent = transform;
            OdinMedia mediaComponent = container.AddComponent<OdinMedia>();
            mediaComponent.MediaDecoder = decoder; // set the decoder to copy data from
            mediaComponent.Parent = peer; // a parent is required when using OdinMedia, otherwise optional
            mediaComponent.enabled = true;

            // optionally add internal effects wrapped with Unity to the decoder (Output/Playback)
            // for audio pipeline manipulation

            // add a playback volume boost
            OdinVolumeBoostComponent volumeBoostComponent = container.AddComponent<OdinVolumeBoostComponent>();
            volumeBoostComponent.Media = mediaComponent;
            // add a playback mute 
            OdinMuteAudioComponent muteComponent = container.AddComponent<OdinMuteAudioComponent>();
            muteComponent.Media = mediaComponent;
            // see other Effects or build one
            // with CustomEffect (PipelineEffect)
            // or with Unity helper class OdinCustomEffectUnityComponentBase
        });
    }

Of course, you also need to handle the case where users disconnect or remove their media from the room (e.g. when muting themselves). You can do this by subscribing to the Room.OnMediaStopped event.

    private void Example_OnMediaStopped(object sender, ulong peerId, ushort mediaId)
    {
        Debug.Log($"Peer {peerId} removed media {mediaId}");

        DispatchDestroyAudioSource($"OdinDecoder {mediaId}");
    }
    
    /// <summary>
    /// Disposing objects is necessary to prevent memory leaks
    /// </summary>
    private void DispatchDestroyAudioSource(string gameObjectName)
    {
        UnityQueue.Enqueue(() =>
        {
            OdinMedia mediaComponent = this.gameObject
                .GetComponentsInChildren<OdinMedia>()
                .FirstOrDefault(component => component.name == gameObjectName);

            if (mediaComponent != null)
                Destroy(mediaComponent.gameObject);
        });
    }

Testing the setup

Just press Play in the Unity Editor to test your basic ODIN integration. The operating system will ask you for permission to use the microphone. To test the actual voice chat, build the client and run it on another machine or hand it to a colleague.

You can also use our free Web Client to test your setup. Just open the Web Client in your browser, set the same access key and room name, and you can talk to your Unity client. You can find a guide on how to use the Web Client here.

You should now have simple one-room non-spatial voice chat working. Anyone in the same room can hear and talk to each other. During play and when others are connecting to the same room, you’ll see new GameObjects being created in the hierarchy for peers (they have an OdinPeer component attached to them) and for each peer you’ll see GameObjects with an OdinMedia component being attached.

Implementing 3D spatial voice-chat

Implementing 3D spatial voice-chat requires a bit more work. It’s basically the same as the 2D non-spatial voice-chat, but these two things need to be done:

  • First, the spatialBlend of the AudioSource needs to be set to 1.0, which is full 3D sound. This enables Unity to calculate the volume and direction of the sound based on the position of the GameObject in the scene (see the sketch after this list).
  • Second, the OdinPeer component (and thus subsequent OdinMedia objects) needs to be attached to a GameObject that is moving in the scene. This is typically the GameObject that represents the player or the avatar of the player.
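
For example, you could extend the DispatchCreateAudioSource method shown earlier to switch the created playback to 3D. This is a minimal sketch that assumes OdinMedia attaches an AudioSource to the same GameObject; if it does not in your SDK version, adding one yourself works as well:

    // Inside the UnityQueue.Enqueue(...) block of DispatchCreateAudioSource,
    // after the OdinMedia component has been added:
    AudioSource audioSource = container.GetComponent<AudioSource>();
    if (audioSource == null)
        audioSource = container.AddComponent<AudioSource>();

    // 1.0f = full 3D: Unity attenuates and pans the voice based on the
    // position of the container GameObject relative to the AudioListener
    audioSource.spatialBlend = 1.0f;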

The actual implementation is different for each game or application and depends on how you have set up your multiplayer framework. Typical multiplayer frameworks like Mirror Networking, Unity Netcode for GameObjects or Photon PUN assign connectionIds to players. You can use these connectionIds to identify the player and attach the OdinPeer component to the GameObject that represents the player.

ODIN has its own IDs for peers and media, so to some extent you need to map ODIN’s peer IDs to the (connection) IDs in your game. When connecting to a room in ODIN, you can set a user data object that is sent to all other peers in the room. This user data object can contain the connection ID of the player in your game, which you can then use to map the ODIN peer to the player.
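
A common approach is to serialize a small JSON object containing your framework’s connection ID and attach it as user data when joining. The sketch below uses Unity’s JsonUtility for the encoding; how the resulting byte[] is attached to the peer (a Room property or a join parameter) depends on the SDK version, so check the scripting API:

    // Serializable container for the data shared with other peers;
    // the connectionId field is a hypothetical example
    [System.Serializable]
    public class PeerUserData
    {
        public int connectionId;
    }

    // Encode the connection id as UTF-8 JSON before joining the room
    public static byte[] EncodeUserData(int connectionId)
    {
        var data = new PeerUserData { connectionId = connectionId };
        return System.Text.Encoding.UTF8.GetBytes(JsonUtility.ToJson(data));
    }

    // Decode the user data received from a remote peer to find the
    // matching player in your game
    public static PeerUserData DecodeUserData(byte[] bytes)
    {
        return JsonUtility.FromJson<PeerUserData>(System.Text.Encoding.UTF8.GetString(bytes));
    }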

That’s all there is to it to turn 2D non-spatial voice-chat into 3D spatial voice-chat.

Tip

We have a guide on how to implement 3D spatial voice-chat using the Mirror Networking framework with the 1.x version of the Unity SDK, but the principles are the same and can be applied to any multiplayer framework. You can find the guide here: Unity Mirror Integration.

Mixing 2D and 3D voice-chat

In some games or applications, you might want to mix 2D non-spatial voice-chat with 3D spatial voice-chat. A use case for this is a “world” room with 3D spatial voice-chat, where every player can hear and talk to each other based on their position in the world, combined with a radio room per team that uses 2D non-spatial voice-chat.

Imagine Counter-Strike, where you have 3D spatial voice-chat for the players in the world, but also 2D non-spatial voice-chat for the players in the same team. If a player talks via radio, only the players in the same team hear him over the radio, but he can still be heard by nearby enemies via the 3D spatial voice-chat. So it might be a good strategy to quickly sneak up on the enemy and listen in on what they are trying to discuss privately over their radio.

To do that, you create two Room instances in your scene, each with a different room name. Depending on the room, you would register different event handlers to handle 2D or 3D voice chat.
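
Here is a minimal sketch of that setup, reusing the Room.Create and token pattern from above. The room name is baked into the token, and GetTokenFromBackend is a hypothetical helper that fetches a token from your backend:

    // One room for 3D positional voice, one for the team radio
    _WorldRoom = Room.Create(Gateway, 48000, false);
    _RadioRoom = Room.Create(Gateway, 48000, false);

    // Different handlers create 3D audio sources for the world room
    // and plain 2D playback for the radio room
    _WorldRoom.OnMediaStarted += World_OnMediaStarted;
    _RadioRoom.OnMediaStarted += Radio_OnMediaStarted;

    // Each room needs its own token because the room name is part of it;
    // GetTokenFromBackend is a hypothetical helper (see the token warning above)
    _WorldRoom.Token = GetTokenFromBackend("World");
    _RadioRoom.Token = GetTokenFromBackend("Radio-TeamA");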

Next steps

You should now have a basic understanding of how to use the low-level API of the ODIN Unity SDK. You can now start implementing your own voice chat solution in your game or application. Check out the scripting API for more information on the actual functionality.
