I'm using ffmpeg to record screencasts with this command:
ffmpeg -f alsa -ac 2 -i default -f x11grab -r 15 -s 1280x720 -i :0.0+0,17 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -threads 0 -y video.mkv
It works really well for cases where I only need to record my voice ("Mic" capture works). However, there are situations where I would like to record both what I am saying, and what I am hearing. So, my question:
Is there a way to create a virtual ALSA capture device, which will provide the desired "mic + output" stream?
Last edited by Goran (2012-09-17 02:33:32)
Or maybe you can create your mkv file with two audio tracks, one track from each source? That is what I would want to do in this case. It is often better to be able to adjust the level of each track independently later when editing and then you can mix them into one track.
Not that I know the answer myself; I really want to know it too... But I have some ideas to try.
You can start with checking the options in alsamixer and see if it is possible to enable both inputs at the same time.
Perhaps you can pass some ALSA options to ffmpeg and then add another ALSA source, with different options, for the second audio track?
Or maybe there is some output that already is mixed if that is what you prefer (and adjust levels during live recording with alsamixer).
Edit: https://wiki.archlinux.org/index.php/Al … .28dmix.29
Maybe dmix can be used to mix them?
Or maybe do something with ~/.asoundrc
https://wiki.archlinux.org/index.php/Al … ure_Device
Last edited by ronnylov (2012-08-17 15:24:44)
I suggest that you keep doing what you're doing but run another command at the same time to capture system sound. Maybe the snd-aloop method? Once you have this extra audio file that you want to merge into a video, you can use "mkvmerge".
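A rough sketch of that workflow (untested; hw:Loopback,1 is an assumption that only exists after loading snd-aloop, and it only carries sound if applications are actually playing into the loopback):

ffmpeg -f alsa -ac 2 -i default -f x11grab -r 15 -s 1280x720 -i :0.0+0,17 -acodec pcm_s16le -vcodec libx264 -preset ultrafast -y video.mkv
ffmpeg -f alsa -ac 2 -i hw:Loopback,1 -acodec pcm_s16le -y system.wav
mkvmerge -o final.mkv video.mkv system.wav

Run the two ffmpeg commands in parallel (two terminals), then mux afterwards. Keeping the tracks separate also lets you balance them independently in an editor later.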
Right, but to do that I would have to create a virtual ALSA capture source, because I don't have a default "system capture" source.
Here's some info:
This can be achieved in several ways:
Physically loop back the sound output into the Line In socket.
Enable playback loopback using the mixer. Most cards have the ability to loop back output channels into input channels. If this is the case the output channels will have a capture control. This is often not displayed by default in alsamixer and one has to press F5 to see all the capture controls. Sometimes output-to-input routing is done globally with a capture control associated with a single channel, usually called Mix.
Using the file plugin (possibly in conjunction with the copy plugin) to route the output of an ALSA application to a file.
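For the mixer option, a hedged sketch with amixer (control names like "Mix" and "Capture" vary per card, so these are assumptions):

amixer -c 0 scontrols
amixer -c 0 sset 'Capture' cap
amixer -c 0 sset 'Mix' cap

The first command lists what your mixer actually exposes; the other two mark a control as a capture source, if such a control exists on your card.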
This "record what you hear" thing seems to be something like the "holy grail" of ALSA sound. Most integrated sound chips do not support hardware mixing, and doing a physical loopback or using the ALSA file plugin doesn't really satisfy the needs of most users, I think. Normally you would use a sound server at this point (JACK, PulseAudio).
I had this idea about doing it "pure ALSA" but did not get it working; maybe you can help me out and tell me if it's complete nonsense or if it might work somehow. Here's the pseudocode of the idea. You need to load the snd-aloop module, which is nowadays included in a standard Arch installation (see here):
pcm.main {
    type asym
    playback.pcm "out"
    capture.pcm "in"
}

pcm.in {
    type multi
    slaves.a.pcm "hw:Loopback,1,0"
    slaves.b.pcm "dsnoop"
}

pcm.out {
    type multi
    slaves.a.pcm "hw:Loopback,0,0"
    slaves.b.pcm "dmix"
}
Don't know if this is self-explanatory (or bullshit), so here's the idea: whenever I play something (like aplay -D main sound.wav), it is sent to pcm.out, which routes it to the default dmix plugin (so I can hear it) and to the loopback device. If I record something (like arecord -D main ...), it grabs the default dsnoop plugin (e.g. for mic capture) plus the output from the loopback device (my sound.wav in this case). This is just the general scheme; I know that, to make it more convenient, you would probably have to define your Loopback slaves in a more sophisticated way, like
pcm.in {
    ...
    slaves.a.pcm "loop_in"
    ...
}

pcm.loop_in {
    type dsnoop
    slave.pcm "hw:Loopback,1,0"
}
or so. As said before, I didn't get something like this to work; any opinions about that?
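One more thought on why a bare "type multi" might fail: the multi plugin also wants explicit channel bindings, and it concatenates channels rather than mixing them. A fuller (still untested) sketch of the capture side could be:

pcm.in {
    type multi
    slaves.a { pcm "hw:Loopback,1,0" channels 2 }
    slaves.b { pcm "dsnoop" channels 2 }
    bindings.0 { slave a channel 0 }
    bindings.1 { slave a channel 1 }
    bindings.2 { slave b channel 0 }
    bindings.3 { slave b channel 1 }
}

Note that this yields a 4-channel capture stream (loopback on channels 0-1, mic on 2-3), not a stereo mix; an actual downmix would need something like the route plugin with a ttable on top, or mixing the tracks afterwards in an editor.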
Edit: sorry, I totally overlooked that ConnorBehan already mentioned the aloop thing...
Edit2: wow, just read the thread mentioned by ConnorBehan; this guy (kazuo) is doing something very similar, so it should work somehow.
Last edited by masutu (2012-08-18 23:52:29)
I'm trying to record my desktop sound as well.
sudo modprobe snd_aloop
This creates hw:Loopback,0 and hw:Loopback,1. Great. Surely I can point ffmpeg to record these instead, but how do I start feeding sound into them? Must I configure each application to play out to the loopback individually? If so, it seems like a pain, and maybe I should just take the time to set up pulse instead.
There is a program in alsa-utils called alsaloop. I'm not exactly sure what it does, but when I run it I can't actually hear sound myself, so I need to figure out a way to listen in on the sound while it is running. I'm guessing it is redirecting the sound to the loop.
ffmpeg -f alsa -ac 2 -i hw:Loopback,1...
I was hoping alsaloop was feeding the sound to Loopback,0, and all I needed to do was record Loopback,1, but that doesn't work.
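One thing I haven't tried yet, so treat it as a guess: make the loopback the default ALSA device so every application feeds it without per-app configuration, and wrap its capture side in dsnoop so that alsaloop (for monitoring) and ffmpeg (for recording) can read it at the same time:

# ~/.asoundrc (sketch; hw addresses are guesses for card 0)
pcm.!default {
    type plug
    slave.pcm "hw:Loopback,0,0"
}
pcm.loopsnoop {
    type dsnoop
    ipc_key 2048
    slave.pcm "hw:Loopback,1,0"
}

Then something like "alsaloop -C loopsnoop -P hw:0,0" should let you hear the sound, while "ffmpeg -f alsa -i loopsnoop ..." records it.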
Anyone have any ideas?
Last edited by rodyaj (2012-08-19 07:07:48)
I'm trying this again - still no luck.
This post by pigiron seemed promising, but even though I got the hda-analyzer working, I have no idea what to actually change.
There are many different nodes visible in the graph for my Realtek ALC663 codec, and I can see connections that are clearly disabled (quite a few, actually), but I don't know if it's a good idea to just enable everything ...
If anyone could give me some advice, or point me to some relevant documentation, I would be really grateful.
Since I don't have a Realtek ALC663 codec chip, the best I can do is provide guidance.
While all the info you need is provided by the hda-analyzer program, it is confusing. What helped me (a bunch) to better visualize the "virtual wires" of my codec was to download its datasheet.
I found one for the Realtek ALC663 here:
Just search on "alc663" in the search bar at the upper right. I then had to click the PDF symbol in the search results, which displayed just the first page of the datasheet; clicking on that first page let me download the whole document... at least that's how it worked for me. Otherwise, try something like Google's "Advanced Search" function to look for PDF files with some relevant search words.
Then go to the Block Diagram page of that document. This is what really helped me understand/visualize the wiring. But to be redundant, this diagram should match the wiring shown in hda-analyzer... it's just more "human friendly".
Notice the hexadecimal numbers that are inside the ovals (0Dh, 1Bh, etc). For all my different codec chips, these numbers have matched the Node IDs shown in hda-analyzer (and all other ALSA data on my systems), but that might not be true for all codecs, so be careful.
Also notice that not all the "virtual wires" are shown. Just like in some electronic schematics, the document writers "cheated" by labeling a "virtual wire" and then using that label elsewhere in the document. For example, the "virtual wire" coming from Node 0Fh is labeled "Mono DAC". That "virtual wire" connects to the input of Node 17h on the wire labeled "MONO".
So let's follow some signals... like the "front" two stereo channels...
(1) The PCM digital audio signal will come in at Node 02h. This node seems to allow selection ("SRC"), convert from digital to analog ("DAC"), and volume control ("VOL") from -64 dB to 0 dB in 1 dB steps. This codec chip seems to support 44.1k, 48k, 96k, and 192k samples per second.
(2) It then heads to Nodes 0Fh and 0Ch, but let's just follow the 0Ch mixer node. The signal leaves that node with the label "Front DAC" and connects to many other nodes (14h, 15h, 19h, and more).
(3) But let's just look at Node 14h. The output (called "FRONT(Port-D)") would probably be connected to speakers if this was a laptop, or to jack(s) if this was a desktop. The input to this node has a mixer that mixes the "Front" and "Surr" channels, then it goes into an Input/Output amplifier ("I/Oa") before leaving the chip.
Hmmmm... think about that for a minute. An external connection that is obviously an output (speakers) has an input??? The answer is "yes" for this codec chip. I quote from the datasheet:
"Multiple analog IO (except MONO, PCBEEP, and HP-OUT) are input and output capable, and provide headphone amplifiers."
That was the feature that I used to create a hardware loopback on my ALC662 chip... and it appears that you can do the same on your ALC663. You probably just need to connect and/or unmute some nodes using hda-analyzer. I've found that other codec chips from other manufacturers also have this ability.
So... assuming that you wanted to loop the "front" signal path that we just followed back as input (capture)... let's follow that path...
ALSA play to hw:?,? -> 02h -> 0Ch -> 14h -> ((22h -> 09h) or (23h -> 08h)) -> ALSA capture from hw:?,?
So fire up hda-analyzer and verify that the path that you want is connected and unmuted and make the required changes if they aren't. Simple, right?... chuckle.
In my case I used hda-analyzer to connect the front two stereo channels going to my "Line Out" into the capture input, and found that I also needed to unmute some of the mixers/switches in that new path that were not available in alsamixer. Then I fired up alsamixer and played with its controls until I got a good bouncy audio signal using:
arecord -vv -f cd -V stereo -D plughw:0,0 /dev/null
while simultaneously playing some tunes, to verify that it was working. Then I told hda-analyzer to create a Python script of the changes. Then I used hda-analyzer to put the codec back to its original condition and created another Python script.
Now I had scripts to turn on the hardware loopback and to turn it back off again. Like I said in that previous thread, the changes will disappear on a reboot unless you jump through even more hoops.
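For what it's worth, the same kind of node changes can in principle be replayed with hda-verb from alsa-tools (the node IDs and values here are placeholders, not the real ones for any particular codec):

hda-verb /dev/snd/hwC0D0 0x15 SET_PIN_WIDGET_CONTROL 0x40
hda-verb /dev/snd/hwC0D0 0x22 SET_AMP_GAIN_MUTE 0x7100

The first enables output on a pin widget; the second unmutes an input-amp channel. But the generated Python scripts are the safer route, since they record exactly what you changed.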
Obviously, you need to use the correct "plughw:X,Y" for your system (i.e. the correct sound card,device)... Hint: look at the output of "arecord -l". Like your codec, mine also had two capture devices to complicate matters, so I had to keep trying different alsamixer settings and/or "plughw:X,Y" parameters until things smoothed out.
Now just fire up projectM-libvisual-alsa and enjoy the trip... or try your screencast quest.
This is, by far, one of the hardest methods for loopback... but perhaps the most personally satisfying/frustrating, and with the fewest problems once it's working.
Last edited by pigiron (2012-09-17 00:05:18)
I can't believe that I didn't notice the correspondence between graph hex, and Block Diagram hex (0x15 <-> 15h). It seems so obvious now, but I guess hindsight is 20/20, as they say.
Your detailed guidance was extremely helpful, but I didn't follow it to the letter: Changing anything related to microphone input seemed unnecessary, because that was already properly routed into the ADC, so I simply created the following connection: 0x0c -> 0x15 -> 0x22 (I did that by enabling OUT under Widget Control, on node 0x15).
Then, on node 0x22, I unmuted the Input Amplifier channels for node 0x15 (as far as I can tell, the ordering of these "Val[N]" groups corresponds to the ordering in the Connection List).
After that ... Well, I didn't have to do anything else. I mean, it just worked: I ran the same ffmpeg command, which I typically use for screencasts, and it picked up on all my system sounds, along with everything coming from my mic, as one perfectly mixed stream. The audio volume on the recording matches the master volume at record time, which is exactly as I would expect it to work.
Twiddling on this level can be somewhat intimidating, but to be perfectly honest, I feel that this is by far the easiest method to get "record what you hear + mic" functionality on pure ALSA.
I really don't know why ALSA is unable to automatically detect and enable these features, when they obviously exist in hardware. I would understand this on mainstream platforms, where "copyright police" can essentially dictate which hardware features I am allowed to use, but I don't think this nonsense would extend to Linux, right?
In either case: Thank you!
Last edited by Goran (2015-02-25 01:00:54)