How to change the phase of a sine wave with PsychPortAudio in realtime?

It's nsamples-1 because the sample index vector starts at 0, not 1.

Right.

The phase shift is relative to the pafixedsine reference wave, so you'll always need the same shift to cancel out. Also note that the pafixedsine wave is created via sin(), whereas the pashiftsine wave is actually a cosine. A cosine is essentially a 90 degree shifted sine; that's why you need 270 degrees = 180 + 90 degrees instead of the 180 degrees one would expect for cancellation. If you change the sin() into a cos() in line 160, both waves are cosines, and the shift for cancellation is the expected 180 degrees.
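A quick numeric check of that relation (a sketch; the sign convention here assumes the phase shift is applied as a delay, i.e. cos(x - phi), so verify it against the demo's code):

```matlab
% Verify numerically: a cosine is a 90 degree shifted sine, and a 270
% degree delay of the cosine cancels against the sine.
x = linspace(0, 2*pi, 1000);
max(abs(cos(x) - sin(x + pi/2)))      % ~0: cos(x) == sin(x + 90 deg)
max(abs(sin(x) + cos(x - 3*pi/2)))    % ~0: sum cancels with a 270 deg delay
```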

Everybody has to begin somewhere, but good that my services are worth their money.

Because of lines 129-131; see the explanation comment before line 129. Depending on what sampling rate your sound card is using and what target frequency you want, a single period of a sine or cosine wave may not "fit" properly, and you'd get a vector loaded into the looping playback buffer that is a sample too short or too long, ergo a slight discontinuity after each period, which may cause audible artifacts and/or phase drift. The comment explains it, but one needs to find a duration / number of repetitions that fits properly for the given sampling rate. Also what Diederick said.

However, the chosen length of support tries to minimize the length of the sound vector that gets played in infinite loop mode.
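Assuming both the target frequency and the sampling rate are integers (variable names are illustrative, not necessarily the demo's), the shortest loopable buffer can be sketched like this:

```matlab
% Sketch: find the shortest buffer that holds an integer number of periods
% of 'freq' at sampling rate 'fs', so looped playback wraps around without
% a discontinuity. Assumes integer freq and fs; names are illustrative.
fs = 44100;                        % sampling rate in Hz
freq = 500;                        % target frequency in Hz
nsamples = fs / gcd(freq, fs);     % shortest integer-period fit, in samples
support = 2 * pi * freq * (0:nsamples-1) / fs;
wave = sin(support);               % seamless when repeated end-to-end
```

For fs = 44100 and freq = 500 this gives nsamples = 441, i.e. exactly 5 periods per buffer.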

If you know that a sound or trial has a fixed duration of only a few seconds, you could just skip all that math, set wavedur to the wanted playback/trial duration and then skip lines 110 - 132. You’d just set the number of playback repetitions in PsychPortAudio('Start', ..) to 1 for one-time playback, instead of 0 for endless repetition.
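A sketch of that simpler path, using the demo's variable and device names as assumptions:

```matlab
% Sketch, assuming the demo's names: for a fixed trial duration, just
% build 'wavedur' seconds of signal and play it exactly once.
wavedur = 3;                                  % wanted playback/trial duration in seconds
t = (0:round(wavedur * samplerate) - 1) / samplerate;
PsychPortAudio('FillBuffer', pafixedsine, 0.5 * sin(2 * pi * freq * t));
PsychPortAudio('Start', pafixedsine, 1);      % 1 = play once, instead of 0 = loop forever
```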

This is in fact the better approach if you want to add AM modulator slave devices (pamodulator in BasicAMAndMixScheduleDemo.m) to define an AM modulation envelope for gating the sine waves / soft fade-in/out. You can just create and attach modulator slaves to the pafixedsine and pashiftsine virtual audio devices, and then 'FillBuffer' the modulator slaves with an envelope function which is also length(support) samples long and defines the volume at each sample. Important: load this envelope as a single channel into the modulator for pafixedsine (it only has one audio output channel), but as two channels into the modulator for pashiftsine (it has two audio channels, one for the sine and one for the cosine, which need the same envelope applied), i.e. use repmat(envelope, 2, 1) instead of envelope.
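A rough sketch of that setup, based on my reading of BasicAMAndMixScheduleDemo.m (the mode flag 32 marks an 'OpenSlave' device as an AM modulator; double-check the flag and call order against the demo):

```matlab
% Sketch: attach AM modulator slaves and load a shared gating envelope.
% A Hann window serves as an example soft fade-in/out envelope here.
envelope = 0.5 * (1 - cos(2 * pi * (0:nsamples-1) / (nsamples - 1)));
pamodfixed = PsychPortAudio('OpenSlave', pafixedsine, 32);        % 32 = AM modulator
PsychPortAudio('FillBuffer', pamodfixed, envelope);               % 1 channel
pamodshift = PsychPortAudio('OpenSlave', pashiftsine, 32);
PsychPortAudio('FillBuffer', pamodshift, repmat(envelope, 2, 1)); % 2 channels, same envelope
```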

Btw., if you don't want to control playback timing of each slave device individually, you can also move the 'Start' call for pamaster after all 'Start' calls for the slaves. This way all slaves will just go to "ready for action", and only starting the master will trigger all of them to start synchronously.
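Sketched, with the demo's device handles assumed:

```matlab
% Sketch: arm the slaves first (they wait, "ready for action"), then a
% single 'Start' on the master triggers synchronized onset of all of them.
PsychPortAudio('Start', pafixedsine, 0);   % armed, waiting for the master
PsychPortAudio('Start', pashiftsine, 0);   % armed, waiting for the master
PsychPortAudio('Start', pamaster, 0);      % go: all slaves start together
```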

The 0.5 is just there to scale the signal amplitude down to 0.5, so that if one calls with the targetChannel = 2 parameter, to mathematically mix the fixed and shifted wave, the amplitude of both waves combined never exceeds 0.5 + 0.5 == 1 and audio clipping is avoided. No need for these 0.5's if targetChannel = 1 and the audio superposition happens outside the computer, in your ear.

One could have left the 0.5 * out in line 160, and instead used PsychPortAudio('Volume', pafixedsine, 0.5); a la line 187. For your relative volume adjustment, that would be the better approach, as you can then change the loudness dynamically for both the fixed and the shifted sound output channel.
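E.g., a sketch of that approach, with the demo's device handles assumed:

```matlab
% Sketch: keep full-amplitude waves in the buffers and scale per device,
% so relative loudness can be changed at any time, even during playback.
PsychPortAudio('Volume', pafixedsine, 0.5);
PsychPortAudio('Volume', pashiftsine, 0.5);
```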

See above for the 'Volume' function to change levels. Yes, you must make sure that going beyond the -1 to +1 range is avoided in each channel. But that simply means that the 'Volume' should never be set higher than 1.0, and maybe a tad less doesn't hurt. Clipping should be apparent in the visualization window, given that its size is chosen so it only fits -1 to +1 waves.

Btw., you can also apply a global software volume via PsychPortAudio('Volume', pamaster, globalVolume);

Yes. Choosing Linux should give you the most low-level control over physical output volume. On Linux we always take full, exclusive low-level control over the sound card by default, and that excludes the OS messing with volumes etc. in a default configuration. So only the hardware mixer / built-in amplifier of the sound card itself can mess with volume etc.

The terminal application alsamixer allows you low-level control over all available hardware mixer settings, knobs, bells and whistles.

More importantly, there is the amixer command line utility, which you could call from within your script via the system('amixer blah blah blah'); function. See man amixer for the manual page. It allows command line control / scripting of hardware mixer settings, so you can make sure that your sound card is always in the same defined state at the start of your experiment script, with no need to fiddle with GUI sliders, volume buttons and such.
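For example (the control name 'Master' and card index 0 are assumptions; list what your card actually offers with amixer -c 0 scontrols):

```matlab
% Sketch: pin the hardware mixer to a known state at script startup.
% 'Master' and card 0 are assumptions; adapt to your sound card.
system('amixer -c 0 sset ''Master'' 80% unmute');
```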

I don't know how much MS-Windows would interfere with the default WASAPI backend, or how to control equivalent settings apart from the regular volume slider. macOS, I think, is just a hazard in this case, doing random uncontrollable shit in the background, depending on OS version and probably the phase of the moon. In the past, audio was the only decently working thing on macOS; nowadays I would not make that statement anymore.

In the end however, this is why you have microphones and an oscilloscope for independent checking.

See this gist for a hacked-up version of the demo that roughly does some of what I explained above:

Changed pieces are not indented.

Happy playing.
-mario