Don't rely on out_rate vs. n_phases to decide when to use the
interpolating resampler, because out_rate can change when we activate
the adaptive resampler.
Instead, use a boolean that we can set at the start.
If the in and out rates have a very small GCD, we can end up with
a lot of phases. Limit the number of phases to 1024 and switch to
interpolating mode; 1024 phases is enough to interpolate accurately
from.
Together with the MAX_TAPS limit we will never create a filter
size that overflows 32 bits.
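The phase-count clamp can be sketched as follows (a minimal sketch;
MAX_PHASES and the function names are illustrative, not the actual
resampler code):

```c
#include <stdbool.h>
#include <stdint.h>

#define MAX_PHASES 1024  /* limit from the text above */

/* greatest common divisor */
static uint32_t gcd(uint32_t a, uint32_t b)
{
    while (b != 0) {
        uint32_t t = a % b;
        a = b;
        b = t;
    }
    return a;
}

/* An exact polyphase filter needs out_rate / gcd(in_rate, out_rate)
 * phases. When the rates share only a small common factor this count
 * explodes, so clamp it and switch to interpolating between phases. */
static uint32_t phase_count(uint32_t in_rate, uint32_t out_rate, bool *interp)
{
    uint32_t phases = out_rate / gcd(in_rate, out_rate);

    *interp = phases > MAX_PHASES;
    if (*interp)
        phases = MAX_PHASES;
    return phases;
}
```

For 44100 -> 48000 the GCD is 300 and only 160 phases are needed; for
44100 -> 44101 the GCD is 1, so the clamp kicks in and interpolating
mode is used.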
Fixes #5073
DMA buffers from a DRM device are typically accessed using an API tied
to that device, e.g. Vulkan or EGL. To create such a context for use
with a PipeWire stream that passes DRM device DMA buffers, applications
have so far usually guessed, or reused the context with which the
stream content will be presented. This has mostly been the Wayland
EGL/Vulkan context, and while it has worked most of the time, it did so
somewhat by accident; for reliable operation, PipeWire must be aware of
which DRM device a DMA buffer should be accessed with.
To address this, introduce device ID negotiation, allowing sources and
sinks to negotiate which DRM device is supported, and which formats and
modifiers they support.
This will allow applications to stop relying on luck or the windowing
system to figure out how to access the DMA buffers. It also paves the
way for being able to use multiple GPUs for different video streams,
depending on what the sources and sinks support.
Our AVX optimizations are really AVX2 so rename the files and functions
and use the right HAVE_AVX2 and cpu flags to compile and select the
right functions.
Fixes #5072
Add a heuristic to resync streams if controller packet completion times
for different streams differ by too much. This likely indicates the
controller has lost sync between the streams, and we have to reset
playback.
There's no way to do this properly. The ISO-over-HCI transport is badly
specified in the Bluetooth Core Specification. Many controllers have
broken implementations of the current send timestamp read command, so
packets carry no identifier for which ISO interval they belong to.
Controllers try to reconstruct the right interval using
manufacturer-specific heuristics, probably based on packet arrival
times. The kernel and USB introduce timing jitter, and playback
sometimes desyncs: packets from some streams are persistently sent some
multiple of the SDU interval off from the intended timing.
Try to detect this from packet completion latencies. This is somewhat
manufacturer-specific; tested on Intel & Realtek, hopefully it works on
others too.
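The shape of the check can be illustrated with a sketch like the
following (names and the exact threshold are hypothetical; as noted
above, the real heuristic is manufacturer-specific):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical sketch: if the controller's latest packet completion
 * latencies for the grouped streams spread by more than a threshold
 * (here half an SDU interval), assume the controller lost sync
 * between the streams and reset playback. */
static bool needs_resync(const int64_t *latency_us, int n_streams,
                         int64_t sdu_interval_us)
{
    int64_t lo = latency_us[0], hi = latency_us[0];

    for (int i = 1; i < n_streams; i++) {
        if (latency_us[i] < lo)
            lo = latency_us[i];
        if (latency_us[i] > hi)
            hi = latency_us[i];
    }
    return (hi - lo) > sdu_interval_us / 2;
}
```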
There are known controller firmware bugs that cause packet completion
reports, mainly for ISO packets, to go missing.
To avoid getting stuck e.g. in ISO queue flushing, we should consider a
packet completed if sufficient time has passed, even if the controller
(and kernel) don't report it completed. Take 1 s as a conservative
timeout; the expected values are a few ms.
These firmware bugs also cause the kernel to stop sending packets if
too many are left uncompleted, but we cannot detect that.
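The timeout fallback amounts to something like the following (a
minimal sketch; the struct and field names are illustrative, not the
actual PipeWire code):

```c
#include <stdbool.h>
#include <stdint.h>

#define COMPLETION_TIMEOUT_NS 1000000000LL  /* 1 s, conservative */

/* Illustrative bookkeeping for a sent packet. */
struct sent_packet {
    int64_t send_time_ns;
    bool reported_complete;
};

/* Consider a packet completed once the controller reports it, or once
 * the timeout has elapsed since it was sent, working around firmware
 * that drops completion reports. */
static bool packet_completed(const struct sent_packet *p, int64_t now_ns)
{
    if (p->reported_complete)
        return true;
    return (now_ns - p->send_time_ns) >= COMPLETION_TIMEOUT_NS;
}
```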
Update volume state on device set volume notifications.
When one device sends a volume notification, CAP specifies that the
volume on the other devices shall be synchronized too.
When the session manager emits loopback nodes for profile autoswitch,
we need to indicate them in the Routes.
Otherwise, the port information in the PulseAudio API doesn't account
for them, and some apps (e.g. GNOME) misbehave, as the loopback node
sometimes doesn't have valid ports.
When using channel maps, the active map should be set with
snd_pcm_set_chmap(). This has to be called while the stream is in the
prepared state.
Track which of the maps the selected format has set, and set it in
do_prepare().
Set up initial HW volumes for BAP profiles similarly to what is done
for A2DP.
As Client, retain the remote volumes as initial values; as Server, use
our own default volumes.
Also, as A2DP Source, use the remote HW volume as the initial value, if
available.
In the Client / A2DP Source modes, the session manager usually restores
its own volumes, overriding what we set here.
These keys have not been used for a very long time. Debian code search does
not turn up any users either. There is also no such thing as "libcamera_capability".
These were created based on the `api.v4l2.cap.*` keys, but at the moment they
are not actually applicable to libcamera. So remove them.
Take active rate correction properly into account when dropping data on
overrun resync.
Drop data only for the currently processed stream, after data has been
consumed from it. Make sure the rate correction factor is updated after
this for the next cycle of the stream.
Also fix the buffer fill level calculation: the fill level
interpolation should use the node rate correction, not the clock rate
diff, since the calculations are done in the system clock domain. Fix
the same issue in the fractional delay calculation, and take no
resampler prefill into account.
Later, we may need some more resampler APIs to avoid such details
leaking in.
Previously, a stream could have its old rate correction locked in, and
its fill level would then end up off target on the next cycle.
Also, when we are capable of PLC, it's better to buffer audio at start,
to get a buffer level close to the target initially.
Delay ISO overrun handling until one cycle after buffering is complete,
so that any resamplers are filled at that point.
When PLC data was produced due to underrun and the decode buffer has
reached the target level, drop received audio if we already did PLC for
that packet. It's better to lose some packets than to have to resync
latency.
When about to underrun while PLC is active, update rate matching before
filling up the buffer, so that the rate slows down and we don't get
stuck in a continuous underrun where PLC fills in data and we drop
received packets.
The rate matching filter assumes the buffer level for cycle j+1 is
buffer(j+1) = buffer(j) + recv(j) - corr(j+1) * duration
but what we are actually doing is
buffer(j+1) = buffer(j) + recv(j) - corr(j-1) * duration
because the computed correction factor is not used for the next cycle,
but for the one following it. Although the filter is still stable in
theory, the extra lag causes oscillations to be damped less.
Fix this by using the computed correction factor for the next cycle, as
there's no reason to want more lag in rate matching.
This changes c(j-1) -> c(j) in the assumptions, which turns out to
fix the situation. Fix the filter derivation to match. The filter
coefficients stay as they were, and they are actually exactly correct
also for short averaging times.
In practice, it is observed that ISO RX with quantum 4096 converges to
stable rate, whereas previously the matching retained small
oscillations.
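The effect of the extra cycle of lag can be reproduced with a toy
simulation of the recurrence above, using a simple proportional
correction (the gain, levels, and function names are illustrative, not
the actual filter):

```c
#include <math.h>

#define DURATION 1024.0   /* samples per cycle (illustrative) */
#define TARGET   4096.0   /* target buffer level */
#define GAIN     0.5      /* proportional gain (illustrative) */

/* Simulate buffer(j+1) = buffer(j) + recv - corr * DURATION with
 * recv = DURATION, where the correction computed at cycle j is
 * applied either at j+1 (lag 1, the fixed behavior) or at j+2
 * (lag 2, the old behavior). Returns the accumulated absolute level
 * error over the run. */
static double simulate(int lag, int cycles)
{
    double buffer = TARGET + 512.0;  /* start off target */
    double delayed = 1.0;            /* correction in flight for lag 2 */
    double total_err = 0.0;

    for (int j = 0; j < cycles; j++) {
        double c_new = 1.0 + GAIN * (buffer - TARGET) / DURATION;
        double c_apply;

        if (lag == 1) {
            c_apply = c_new;         /* applied on the next cycle */
        } else {
            c_apply = delayed;       /* applied one cycle too late */
            delayed = c_new;
        }
        buffer += DURATION - c_apply * DURATION;
        total_err += fabs(buffer - TARGET);
    }
    return total_err;
}
```

With lag 1 the error decays geometrically; with lag 2 it rings before
decaying, so the accumulated error is clearly larger.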
The buffer level number includes the current quantum, so it should not
be subtracted. We do this after recovery from a glitch, and it throws
rate matching off.
The level after recovery should also include the resampler delay.
As BAP server, when we can't satisfy the BAP presentation delay, just
accept a bigger latency and emit a warning, instead of producing broken
audio.
Also make sure it works if the quantum is forced to a larger value than
our wanted node.latency.
Take other latencies into account when selecting the wanted
node.latency for BAP Server.
Fix up port.params usage vs. user flag, which is not used here.
Reset decode buffer rate matching if we need to do PLC due to underrun,
so it doesn't get stuck playing back at an increased rate if the target
is too small.
For better start synchronization, we should wait until all ISO nodes
that are going to be started finish creating the ISO io.
Add a separate ready flag for startup that is set when all Acquire
requests are complete.
Add options to control the advertised supported delays.
A smaller delay requires a smaller node.latency, so use 40 ms as a
reasonable minimum preferred delay.
The HSP and HFP profiles expect that a device functions only as an
audio gateway or as a headset, which is the normal behavior for a
headset, a hands-free car unit, or a phone.
A desktop can perform both roles, but there's no interest in having
both at the same time, as bidirectional audio is already supported.
I'm not 100% sure this was breaking SSE41 builds on the official build
system (I'm building PipeWire with a different process), but I suspect
it was, because you can't combine these into a single translation unit
to sidestep it without including multiple copies of
resample-native-impl.h, which isn't desirable.
We sync the filter graph in two places; make a function so that both
places do the same thing.
Make node_reset clear the setup flag so that we don't have to do that
twice.
If the audioconvert.filter-chain.N property is set early, the graphs
will be added to the active_graphs list but with setup = false. When
the node starts, setup_convert is called, but the graphs aren't added
to filter_graphs. Run do_sync_filter_graph at the end of setup_convert
to add them.
The current implementation only sends the +CIEV:<call>,<active> event
if there's an active modem in ModemManager. This may lead to headset
disconnection, as in (1), if the profile is used by an application
other than a telephony one, e.g. a conference application/website.
This commit improves the dummy call status update by adding a new
"bluez5.disable-dummy-call" property to the bluez5 device, allowing an
external application like WirePlumber to set it dynamically.
(1) https://gitlab.freedesktop.org/pipewire/pipewire/-/issues/1744
Fixes: https://gitlab.freedesktop.org/pipewire/pipewire/-/merge_requests/2606
Parse the TMAP / GMAP features from MediaEndpoint:SupportedFeatures and
pass them on to the codec in SelectProperties, so it can determine
which mandatory features the device supports.
Add a configuration option for specifying which TMAP / GMAP feature
bits we advertise to the remote side.
Although some of these could be determined automatically, for
production systems it's better to have an explicit option specifying
which ones should be advertised, as this may depend on HW capabilities.
When the dynamic data flag is set on the buffer data, it means the
consumer can deal with any data pointer set on the buffer, and we can
simply pass the one from upstream to downstream. If the flag is not
set, we need to copy the buffer data.
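The two paths amount to something like this (a minimal sketch; the
flag value, struct, and function names are illustrative, not the
actual SPA buffer API):

```c
#include <stdint.h>
#include <string.h>

#define FLAG_DYNAMIC (1u << 2)  /* illustrative stand-in for the flag */

/* Illustrative buffer data with preallocated storage for the copy path. */
struct data {
    uint32_t flags;
    uint32_t size;
    void *ptr;       /* consumer-visible data pointer */
    void *storage;   /* preallocated memory for the copy path */
};

/* If the consumer advertises the dynamic flag, just point its buffer
 * at the upstream data; otherwise copy into the buffer's own storage. */
static void forward_data(struct data *dst, const void *src, uint32_t size)
{
    if (dst->flags & FLAG_DYNAMIC) {
        dst->ptr = (void *)src;      /* zero-copy: pass pointer through */
    } else {
        memcpy(dst->storage, src, size);
        dst->ptr = dst->storage;
    }
    dst->size = size;
}
```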
See #5009
When we recalculate the headroom, we also update the latency in frames.
We should express this latency in the graph rate. This is usually the
rate suggested in the target_rate, but when we are forcing our own rate
(mostly when using IRQ or when in DSD/IEC mode), we should ignore that
value and use the rate we will force instead.
Fixes #4977
Add feedback and feedforward controls to the delay. This makes it
possible to build comb and allpass filters with the delay, to create
custom reverb effects.
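The idea can be sketched as a delay line with two extra taps (a
schematic model, not the actual plugin code; names are illustrative).
Setting fb = 0 gives a feedforward comb, ff = 0 a feedback comb, and
ff = -g with fb = g the classic allpass used as a reverb building
block:

```c
#define MAX_DELAY 4096

/* Delay line with feedforward (ff) and feedback (fb) gains. */
struct delay {
    float line[MAX_DELAY];
    int len;      /* delay in samples */
    int pos;      /* read/write position */
    float ff;     /* feedforward gain */
    float fb;     /* feedback gain */
};

/* Process one sample. The transfer function is
 * (ff + z^-len) / (1 - fb * z^-len). */
static float delay_run(struct delay *d, float in)
{
    float delayed = d->line[d->pos];  /* signal from len samples ago */
    float v = in + d->fb * delayed;   /* feedback tap into the line */

    d->line[d->pos] = v;
    d->pos = (d->pos + 1) % d->len;
    return d->ff * v + delayed;       /* feedforward tap + delayed */
}
```

With ff = fb = 0 this degenerates to a pure delay of len samples.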
Otherwise we might end up with partial channels when code doesn't check
the unpositioned flag. It's better to set everything to unknown when
there is a mismatch between the channel count and the layout.