# Audio & Video Capture
This guide will show you how to configure the audio and video capture settings in neko.
Neko uses Gstreamer to capture and encode audio and video in the following scenarios:
- WebRTC clients use the Video and Audio pipelines to receive the audio and video streams from the server.
- The Broadcast feature allows you to broadcast the audio and video to a third-party service using RTMP.
- The WebRTC Fallback mechanism allows you to capture the display in the form of JPEG images and serve them over HTTP using Screencast.
- Clients can share their Webcam and Microphone with the server using WebRTC.
## WebRTC Video
Neko captures the display and encodes it in real time using Gstreamer. The encoded video is then sent to the client over WebRTC, allowing the display to be shared with the client in real time.

Multiple video pipelines can exist in neko, each referenced by its unique pipeline id. Every video pipeline can have its own configuration settings, and clients can either choose which pipeline they want to use or let neko choose the best pipeline for them.

All video pipelines must use the same video codec (defined in the `codec` setting).
The Gstreamer pipeline is started when the first client requests the video stream and is stopped after the last client disconnects.
```yaml
capture:
  video:
    display: "<display_name>"
    codec: "vp8" # default video codec
    ids: [ <pipeline_id1>, <pipeline_id2>, ... ]
    pipelines:
      <pipeline_id1>: <pipeline_config>
      <pipeline_id2>: <pipeline_config>
      ...
```
- `display` is the name of the X display that you want to capture. If not specified, the environment variable `DISPLAY` will be used.
- `codec` available codecs are `vp8`, `vp9`, `av1`, `h264`. Supported video codecs depend on the WebRTC implementation used by the client; `vp8` and `h264` are supported by all WebRTC implementations.
- `ids` is a list of pipeline ids that are defined in the `pipelines` section. The first pipeline in the list will be the default pipeline.
- `pipelines` is a dictionary of pipeline configurations. Each pipeline configuration is defined by a unique pipeline id. They can be defined in two ways: either by building the pipeline dynamically using Expression-Driven Configuration or by defining the pipeline using a Gstreamer Pipeline Description.
### Expression-Driven Configuration
Expressions allow you to build the pipeline dynamically based on the current resolution and framerate of the display. Expressions are evaluated using the gval library. The available variables are `width`, `height`, and `fps` of the display at the time of capture.
```yaml
capture:
  video:
    ...
    pipelines:
      <pipeline_id>:
        width: "<expression>"
        height: "<expression>"
        fps: "<expression>"
        gst_prefix: "<gst_pipeline>"
        gst_encoder: "<gst_encoder_name>"
        gst_params:
          <param_name>: "<expression>"
        gst_suffix: "<gst_pipeline>"
        show_pointer: true
```
- `width`, `height`, and `fps` are the expressions that are evaluated to get the stream resolution and framerate. They can be different from the display resolution and framerate if downscaling or upscaling is desired.
- `gst_prefix` and `gst_suffix` allow you to add custom Gstreamer elements before and after the encoder. Both parameters need to start with `!` and then be followed by the Gstreamer elements.
- `gst_encoder` is the name of the Gstreamer encoder element, such as `vp8enc` or `x264enc`.
- `gst_params` are the parameters that are passed to the encoder element specified in `gst_encoder`.
- `show_pointer` is a boolean value that determines whether the mouse pointer should be captured or not.
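For instance, a pipeline whose bitrate follows the size of the captured display could be sketched as below; the pipeline id `adaptive` and the bitrate formula are illustrative assumptions, not defaults shipped with neko:

```yaml
capture:
  video:
    codec: vp8
    ids: [ adaptive ]
    pipelines:
      adaptive:
        fps: "25"
        gst_encoder: vp8enc
        gst_params:
          # scale the bitrate with the captured area (~1.5 bit/s per pixel);
          # purely illustrative, tune for your content
          target-bitrate: round(width * height * 1.5)
          cpu-used: 4
          end-usage: cbr
```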
#### Example pipeline configuration

**VP8 configuration:**
```yaml
capture:
  video:
    codec: vp8
    # HQ is the default pipeline
    ids: [ hq, lq ]
    pipelines:
      hq:
        fps: 25
        gst_encoder: vp8enc
        gst_params:
          target-bitrate: round(3072 * 650)
          cpu-used: 4
          end-usage: cbr
          threads: 4
          deadline: 1
          undershoot: 95
          buffer-size: (3072 * 4)
          buffer-initial-size: (3072 * 2)
          buffer-optimal-size: (3072 * 3)
          keyframe-max-dist: 25
          min-quantizer: 4
          max-quantizer: 20
      lq:
        fps: 25
        gst_encoder: vp8enc
        gst_params:
          target-bitrate: round(1024 * 650)
          cpu-used: 4
          end-usage: cbr
          threads: 4
          deadline: 1
          undershoot: 95
          buffer-size: (1024 * 4)
          buffer-initial-size: (1024 * 2)
          buffer-optimal-size: (1024 * 3)
          keyframe-max-dist: 25
          min-quantizer: 4
          max-quantizer: 20
```
**H264 configuration:**

```yaml
capture:
  video:
    codec: h264
    ids: [ main ]
    pipelines:
      main:
        width: (width / 3) * 2
        height: (height / 3) * 2
        fps: 20
        gst_prefix: "! video/x-raw,format=I420"
        gst_encoder: "x264enc"
        gst_params:
          threads: 4
          bitrate: 4096
          key-int-max: 15
          byte-stream: true
          tune: zerolatency
          speed-preset: veryfast
        gst_suffix: "! video/x-h264,stream-format=byte-stream"
```
### Gstreamer Pipeline Description
If you want to define the pipeline using a Gstreamer pipeline description, you can do so by setting the `gst_pipeline` parameter.
```yaml
capture:
  video:
    ...
    pipelines:
      <pipeline_id>:
        gst_pipeline: "<gstreamer_pipeline>"
```
Since you now have to define the whole pipeline, you need to specify the src element to get the video frames and the sink element to send the encoded video frames to neko. In your pipeline, you can use `{display}` as a placeholder for the display name; it will be replaced by the actual display name at runtime. You need to set the `name` property of the sink element to `appsink` so that neko can capture the video frames.
Your typical pipeline string would look like this:
```
ximagesrc display-name={display} show-pointer=true use-damage=false ! <your_elements> ! appsink name=appsink
```
See the Gstreamer documentation for `ximagesrc` and `appsink` for more information.
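To sanity-check a pipeline string before putting it into the configuration, you can run it manually with `gst-launch-1.0` (assuming the Gstreamer tools and plugins are installed in the environment where neko runs), replacing `{display}` with the actual display and `appsink` with `fakesink`, since nothing consumes the frames outside of neko:

```shell
# Dry-run a capture pipeline outside of neko; ":99.0" stands in for your display name
gst-launch-1.0 ximagesrc display-name=:99.0 show-pointer=true use-damage=false \
  ! videoconvert \
  ! vp8enc \
  ! fakesink
```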
#### Example pipeline configuration

**VP8 configuration:**
```yaml
capture:
  video:
    codec: vp8
    ids: [ hq, lq ]
    pipelines:
      hq:
        gst_pipeline: |
          ximagesrc display-name={display} show-pointer=true use-damage=false
          ! videoconvert
          ! vp8enc
            target-bitrate=3072000
            cpu-used=4
            end-usage=cbr
            threads=4
            deadline=1
            undershoot=95
            buffer-size=12288
            buffer-initial-size=6144
            buffer-optimal-size=9216
            keyframe-max-dist=25
            min-quantizer=4
            max-quantizer=20
          ! appsink name=appsink
      lq:
        gst_pipeline: |
          ximagesrc display-name={display} show-pointer=true use-damage=false
          ! videoconvert
          ! vp8enc
            target-bitrate=1024000
            cpu-used=4
            end-usage=cbr
            threads=4
            deadline=1
            undershoot=95
            buffer-size=4096
            buffer-initial-size=2048
            buffer-optimal-size=3072
            keyframe-max-dist=25
            min-quantizer=4
            max-quantizer=20
          ! appsink name=appsink
```
**H264 configuration:**

```yaml
capture:
  video:
    codec: h264
    ids: [ main ]
    pipelines:
      main:
        gst_pipeline: |
          ximagesrc display-name={display} show-pointer=true use-damage=false
          ! videoconvert
          ! x264enc
            threads=4
            bitrate=4096
            key-int-max=15
            byte-stream=true
            tune=zerolatency
            speed-preset=veryfast
          ! video/x-h264,stream-format=byte-stream
          ! appsink name=appsink
```
## WebRTC Audio
Only one audio pipeline can be defined in neko. The audio pipeline is used to capture and encode audio, similar to the video pipeline. The encoded audio is then sent to the client using WebRTC.
The Gstreamer pipeline is started when the first client requests the audio stream and is stopped after the last client disconnects.
```yaml
capture:
  audio:
    device: "audio_output.monitor" # default audio device
    codec: "opus" # default audio codec
    pipeline: "<gstreamer_pipeline>"
```
- `device` is the name of the pulseaudio device that you want to capture. If not specified, the default audio device will be used.
- `codec` available codecs are `opus`, `g722`, `pcmu`, `pcma`. Supported audio codecs depend on the WebRTC implementation used by the client; `opus` is supported by all WebRTC implementations.
- `pipeline` is the Gstreamer pipeline description that is used to capture and encode audio. You can use `{device}` as a placeholder for the audio device name; it will be replaced by the actual device name at runtime.
### Example pipeline configuration
```yaml
capture:
  audio:
    codec: opus
    pipeline: |
      pulsesrc device={device}
      ! audioconvert
      ! opusenc
        bitrate=320000
      ! appsink name=appsink
```
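If you are unsure which pulseaudio device name to use, you can list the available sources on the server (inside the container, if neko runs in Docker); the `.monitor` entries capture what is being played back:

```shell
# List pulseaudio sources; monitor sources such as "audio_output.monitor" capture the playback
pactl list short sources
```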
## Broadcast
Out of the box, neko allows you to broadcast the display and audio capture to a third-party service. This can be used to broadcast to a streaming service like Twitch or YouTube, or to a custom RTMP server like OBS, Nginx RTMP module, or MediaMTX.

The Gstreamer pipeline is started when the broadcast is started and is stopped when the broadcast is stopped, regardless of the clients connected.
```yaml
capture:
  broadcast:
    audio_bitrate: 128 # in kbit/s
    video_bitrate: 4096 # in kbit/s
    preset: "veryfast"
    pipeline: "<gstreamer_pipeline>"
    url: "rtmp://<server>/<application>/<stream_key>"
    autostart: true
```
The default encoder uses `h264` for video and `aac` for audio, muxed in the `flv` container and sent over the `rtmp` protocol. You can change the encoder settings by setting a custom Gstreamer pipeline description in the `pipeline` parameter.
- `audio_bitrate` and `video_bitrate` are the bitrate settings for the default audio and video encoders, expressed in kilobits per second.
- `preset` is the encoding speed preset for the default video encoder. See the encoder documentation for the available presets.
- `pipeline` when set, the encoder settings above are ignored and the custom Gstreamer pipeline description is used. In the pipeline, you can use `{display}`, `{device}`, and `{url}` as placeholders for the X display name, pulseaudio audio device name, and broadcast URL respectively.
- `url` is the URL of the RTMP server where the broadcast will be sent. This can be set later using the API if the URL is not known at the time of configuration or is expected to change (see the URL examples after this list).
- `autostart` is a boolean value that determines whether the broadcast should start automatically when neko starts; this only works if the URL is set.
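For illustration, the `url` for common services follows the `rtmp://<server>/<application>/<stream_key>` pattern shown above; the values below are only examples, so always verify the exact ingest URL in your provider's documentation:

```yaml
capture:
  broadcast:
    # Twitch ingest (example, check your region):
    url: "rtmp://live.twitch.tv/app/<stream_key>"
    # YouTube ingest (example):
    # url: "rtmp://a.rtmp.youtube.com/live2/<stream_key>"
    autostart: true
```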
### Example pipeline configuration

**X264 configuration:**
```yaml
capture:
  broadcast:
    pipeline: |
      flvmux name=mux
        ! rtmpsink location={url}
      pulsesrc device={device}
        ! audio/x-raw,channels=2
        ! audioconvert
        ! voaacenc
        ! mux.
      ximagesrc display-name={display} show-pointer=false use-damage=false
        ! video/x-raw,framerate=28/1
        ! videoconvert
        ! queue
        ! x264enc bframes=0 key-int-max=0 byte-stream=true tune=zerolatency speed-preset=veryfast
        ! mux.
```
**NVENC H264 configuration:**

```yaml
capture:
  broadcast:
    pipeline: |
      flvmux name=mux
        ! rtmpsink location={url}
      pulsesrc device={device}
        ! audio/x-raw,channels=2
        ! audioconvert
        ! voaacenc
        ! mux.
      ximagesrc display-name={display} show-pointer=false use-damage=false
        ! video/x-raw,framerate=30/1
        ! videoconvert
        ! queue
        ! video/x-raw,format=NV12
        ! nvh264enc name=encoder preset=low-latency-hq gop-size=25 spatial-aq=true temporal-aq=true bitrate=2800 vbv-buffer-size=2800 rc-mode=6
        ! h264parse config-interval=-1
        ! video/x-h264,stream-format=byte-stream,profile=high
        ! h264parse
        ! mux.
```
This configuration requires an Nvidia GPU with NVENC support and the Nvidia docker image of neko.
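If you want to test the broadcast without a streaming service, you can point the `url` at a local RTMP receiver. One possible sketch, assuming `ffmpeg` with RTMP support is running on a host reachable from the neko server:

```shell
# Accept a single incoming RTMP stream and dump it to a file for inspection
ffmpeg -listen 1 -i rtmp://0.0.0.0:1935/live/test -c copy /tmp/broadcast-test.flv
```

The broadcast `url` in neko would then be `rtmp://<host>:1935/live/test`.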
## Screencast
As a fallback mechanism, neko can capture the display as JPEG images that the client requests over HTTP. This is useful when the client does not support WebRTC, cannot establish a WebRTC connection, or has a temporary issue with the WebRTC connection and should not miss the content being shared.
This is a fallback mechanism and should not be used as a primary video stream because of the high latency, low quality, and high bandwidth requirements.
The Gstreamer pipeline is started in the background when the first client requests the screencast and is stopped after a period of inactivity.
```yaml
capture:
  screencast:
    enabled: true
    rate: "10/1"
    quality: 60
    pipeline: "<gstreamer_pipeline>"
```
- `enabled` is a boolean value that determines whether the screencast is enabled or not.
- `rate` is the framerate of the screencast. It is expressed as a fraction of frames per second, for example, `10/1` means 10 frames per second.
- `quality` is the quality of the JPEG images. It is expressed as a percentage, for example, `60` means 60% quality.
- `pipeline` when set, the default pipeline settings above are ignored and the custom Gstreamer pipeline description is used. In the pipeline, you can use `{display}` as a placeholder for the X display name.
### Example pipeline configuration
```yaml
capture:
  screencast:
    enabled: true
    pipeline: |
      ximagesrc display-name={display} show-pointer=true use-damage=false
      ! video/x-raw,framerate=10/1
      ! videoconvert
      ! queue
      ! jpegenc quality=60
      ! appsink name=appsink
```
## Webcam
This feature is experimental and may not work on all platforms.
Neko allows you to capture the webcam on the client machine and send it to the server using WebRTC. This can be used to share the webcam feed with the server.
The Gstreamer pipeline is started when the client shares their webcam and is stopped when the client stops sharing it. At most one webcam pipeline can be active at a time.
```yaml
capture:
  webcam:
    enabled: true
    device: "/dev/video0" # default webcam device
    width: 640
    height: 480
```
- `enabled` is a boolean value that determines whether the webcam capture is enabled or not.
- `device` is the name of the video4linux device that will be used as a virtual webcam.
- `width` and `height` are the resolution of the virtual webcam feed.
In order to use the webcam feature, the server must have the v4l2loopback kernel module installed and loaded. The module can be loaded using the following commands:
```shell
# Install the required packages (Debian/Ubuntu)
sudo apt install v4l2loopback-dkms v4l2loopback-utils linux-headers-`uname -r` linux-modules-extra-`uname -r`

# Load the module with exclusive_caps=1 to allow multiple applications to access the virtual webcam
sudo modprobe v4l2loopback exclusive_caps=1
```
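After loading the module, you can confirm that a loopback video device exists; the device number may differ on your system:

```shell
# Check that the module is loaded and a /dev/video* device was created
lsmod | grep v4l2loopback
ls /dev/video*
```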
This is needed even if neko is running inside a Docker container. In that case, the `v4l2loopback` module must be loaded on the host machine and the device must be mounted inside the container:
```yaml
services:
  neko:
    ...
    devices:
      - /dev/video0:/dev/video0
    ...
```
## Microphone
Neko allows you to capture the microphone on the client machine and send it to the server using WebRTC. This can be used to share the microphone feed with the server.
The Gstreamer pipeline is started when the client shares their microphone and is stopped when the client stops sharing it. At most one microphone pipeline can be active at a time.
```yaml
capture:
  microphone:
    enabled: true
    device: "audio_input"
```
- `enabled` is a boolean value that determines whether the microphone capture is enabled or not.
- `device` is the name of the pulseaudio device that will be used as a virtual microphone.