From: jungledmnc on 14 Apr 2010 11:44

I have implemented phase vocoder pitch shifting, started from Bernsee's code and tweaked it, but basically it works pretty much the same way. I ran into trouble with stereo sounds, however - since each channel is different, the "output phase accumulator" used in the synthesis step ends up different for each channel, which obviously results in "wider" output. When you then plug in a monophonic source, it gets even worse, because the phase accumulator is already filled with different values for each channel.

So is there a way to somehow keep the stereo field intact, or at least limit the effect?

Thanks in advance.
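[For context, the per-bin synthesis phase accumulation in a Bernsee-style vocoder looks roughly like this minimal sketch. The name `advance_synthesis_phase` and its parameters are illustrative, not taken from Bernsee's actual code; run per channel with slightly different "true frequency" estimates, the two accumulators drift apart, which is the widening described above.]

```python
import math

def advance_synthesis_phase(accum, true_freq, synthesis_hop, sample_rate):
    """Advance one bin's output phase accumulator by the phase that the
    bin's estimated "true" frequency sweeps during one synthesis hop."""
    accum += 2.0 * math.pi * true_freq * synthesis_hop / sample_rate
    # wrap into (-pi, pi] so the accumulator stays bounded
    return math.atan2(math.sin(accum), math.cos(accum))

# left and right channels estimate slightly different frequencies for
# the same bin, so their accumulated phases diverge over time
phase_l = advance_synthesis_phase(0.0, 440.0, 256, 44100)
phase_r = advance_synthesis_phase(0.0, 441.0, 256, 44100)
```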
From: robert bristow-johnson on 14 Apr 2010 12:18

On Apr 14, 11:44 am, "jungledmnc" <jungledmnc(a)n_o_s_p_a_m.gmail.com> wrote:
> I have implemented phase vocoder pitch shifting, started from Bernsee's
> code and tweaked it, but basically it works pretty much the same way. I ran
> into a trouble with stereo sounds however - since each channels is
> different, the "output phase accumulator" used in synthesis step gets
> different, which obviously results in "wider" output. When you then plug in
> a monophonic source, it gets even worse, because the phase accumulator is
> already filled with different values for each channel.
>
> So is there a way to somehow keep the stereo field intact, or at least
> limit the effect?

apply the same phase shift to both channels. it might not be the optimal phase shift (for glitch-free splicing between frames) for either, but if you think about it, you might be able to come up with a good compromise. it also means that the frequency groups would have to be common between the channels.

i have no idea if this would work or not, but maybe convert LR into MS and operate on the M and S signals and convert back to LR when you're done.

rots o' ruk.

r b-j
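[Both suggestions can be sketched in a few lines. This is a hedged illustration, not code from either poster: `lr_to_ms`/`ms_to_lr` are the standard mid/side conversions, and `common_bin_phase` is one possible way to pick a "compromise" phase shared by both channels - the phase of the summed complex spectra, which is naturally magnitude-weighted.]

```python
import cmath

def lr_to_ms(left, right):
    """Convert left/right sample sequences to mid/side."""
    mid  = [(l + r) * 0.5 for l, r in zip(left, right)]
    side = [(l - r) * 0.5 for l, r in zip(left, right)]
    return mid, side

def ms_to_lr(mid, side):
    """Inverse of lr_to_ms: recover left/right from mid/side."""
    left  = [m + s for m, s in zip(mid, side)]
    right = [m - s for m, s in zip(mid, side)]
    return left, right

def common_bin_phase(l_bin, r_bin):
    """One possible compromise phase for a single FFT bin, applied to
    both channels: the phase of the sum of the two complex spectra."""
    return cmath.phase(l_bin + r_bin)
```

[Applying `common_bin_phase` to both channels preserves their relative phase within the frame, at the cost of a phase that may be optimal for neither channel's splicing.]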
From: jungledmnc on 15 Apr 2010 09:47

>apply the same phase shift to both channels. it might not be the
>optimal phase shift (for glitch-free splicing between frames) for
>either, but if you think about it, you might be able to come up with a
>good compromize. it also means that the frequency groups would have
>to be common between the channels.
>
>i have no idea if this would work or not, but maybe convert LR into MS
>and operate on the M and S signals and convert back to LR when you're
>done.
>
>rots o' ruk.
>
>r b-j

The LR -> MS idea is interesting indeed, though it will probably cause trouble on some signals. Actually both ideas have the same problem - they would work on mono signals, but imagine a stereo signal where the pitch in the two channels is slightly different (for example as a result of a chorus effect, which is actually based on that). This would probably cancel the difference and create mono (or almost mono) output, wouldn't it?

jungledmnc
From: robert bristow-johnson on 15 Apr 2010 12:21

On Apr 15, 9:47 am, "jungledmnc" <jungledmnc(a)n_o_s_p_a_m.gmail.com> wrote:
> >apply the same phase shift to both channels. it might not be the
> >optimal phase shift (for glitch-free splicing between frames) for
> >either, but if you think about it, you might be able to come up with a
> >good compromize. it also means that the frequency groups would have
> >to be common between the channels.
>
> >i have no idea if this would work or not, but maybe convert LR into MS
> >and operate on the M and S signals and convert back to LR when you're
> >done.
> ....
>
> The LR -> MS idea is interesting indeed, though probably will cause
> troubles on several signals. Actually both ideas have the same trouble -
> they would work on mono signals, but imagine a stereo, where pitch in both
> channels is slightly different (for example as a result of a chorus effect,
> which is actually based on it). This would probably cancel this difference
> and create mono output (or almost), wouldn't it?

consider what the stereo imaging is in the first place (no processing at all) with your described situation. the relative phase between the two channels (and therefore the perceived sound field location) will be changing in time. do you want the rate of that change to speed up as you pitch the tone up or not? depending on the answer, you have to do something different in the two cases. in other words, do you want the Hz difference between the tones in the two channels to speed up as the average Hz is raised or not?

to help you think about it a little, remember that

    2 * cos(w1*t) * cos(w2*t) = cos((w1+w2)*t) + cos((w1-w2)*t)

think about that a little bit. and consider what behavior you want your pitch shifter to have for the case of two identical tones (but slightly detuned from each other) in the L and R channels.

r b-j
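[The identity above can be checked numerically; the frequencies 440 Hz and 443 Hz are arbitrary example values. The (w1-w2) term is the 3 Hz beat between the two detuned tones: pitch shifting both by a common ratio scales that beat rate along with everything else, which is the choice rbj is pointing at.]

```python
import math

def product_vs_sum(f1, f2, t):
    """Evaluate both sides of 2*cos(w1 t)*cos(w2 t) =
    cos((w1+w2) t) + cos((w1-w2) t) at time t."""
    w1, w2 = 2.0 * math.pi * f1, 2.0 * math.pi * f2
    lhs = 2.0 * math.cos(w1 * t) * math.cos(w2 * t)
    rhs = math.cos((w1 + w2) * t) + math.cos((w1 - w2) * t)
    return lhs, rhs
```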