This issue has been solved, but I'll leave a short reference for anybody who might be interested.
I record games at high quality and provide commentary over them, and the process involved was extremely long and inefficient, since it relied heavily on my continuous input and "care".
A quick run-through of what I do:
1. Record the game content (picture and audio) with FRAPS, uncompressed (generating several hundred gigs of data for a few hours' worth of video). Record my commentary through Audacity.
2. Clean the static noise out of the commentary, clip off extras, and silence any annoying disturbances that may have occurred during recording.
3. Import the game and commentary content into Sony Vegas. Sync up and verify the content. Create regions specifying where the parts will start and end (each region about 14 minutes long).
4. Render each region using the Lagarith Lossless Codec, which generates another, bigger lossless file to be sent for compression. Sizes are about 35GB for 15 minutes of video.
5. Encode the file with x264 through MeGUI, which gives me the video and audio separately (approximately 450MB of data for the same file produced with Lagarith).
6. Multiplex (merge) the video and audio into one full file, verify it by watching it, and write down a title as well as a description of what happened during that part of the game.
7. Upload to YouTube, assign tags, create a playlist, update links for the next and previous parts, etc.
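For anyone wanting to script the encode and mux steps instead of clicking through MeGUI, here's a minimal sketch of how the command lines could be assembled. The file names, the CRF value, and the use of mkvmerge for muxing are my own illustrative assumptions, not part of the workflow above:

```python
# Sketch: build x264 encode and mkvmerge mux command lines for one part.
# File names, CRF value and the choice of mkvmerge are illustrative assumptions.

def build_x264_cmd(source, out_video, crf=20):
    """x264 CLI encode: video only; audio is handled separately."""
    return ["x264", "--crf", str(crf), "--preset", "slow",
            "--output", out_video, source]

def build_mux_cmd(video, audio, out_file):
    """mkvmerge multiplexes the raw video and audio streams into one MKV."""
    return ["mkvmerge", "-o", out_file, video, audio]

if __name__ == "__main__":
    enc = build_x264_cmd("part01.avi", "part01.264")
    mux = build_mux_cmd("part01.264", "part01.aac", "part01.mkv")
    print(" ".join(enc))
    print(" ".join(mux))
    # To actually run them (requires x264 and mkvmerge on PATH):
    # import subprocess
    # subprocess.run(enc, check=True)
    # subprocess.run(mux, check=True)
```

Keeping the commands as lists makes it easy to batch all regions of a session in one loop.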
All the steps felt necessary and fine to do, except for step 4. It was heavily taxing my HDD and slowing the whole process down. Waiting 15 to 20 minutes for this intermediary step to finish was extremely annoying and a heavy drain on my time.
It also felt like a waste of resources, since it processed fairly slowly while using nowhere near 100% of my CPU - I'm not sure whether it was HDD-bottlenecked, given that it takes massive files as input (read) and produces massive files as output (write).
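Some back-of-the-envelope arithmetic supports the HDD-bottleneck suspicion: reading ~35GB while writing roughly as much over a ~16-minute render adds up to a combined sequential load close to what a single consumer HDD of that era sustains, before even counting seek overhead from interleaved reads and writes. A rough sketch (the 16-minute figure and equal read/write sizes are my assumptions based on the numbers above):

```python
# Rough throughput estimate for the lossless render step (step 4).
# Assumes ~35GB read and ~35GB written over a ~16-minute render.

def mb_per_second(gigabytes, minutes):
    """Average throughput in MB/s for moving `gigabytes` in `minutes`."""
    return gigabytes * 1024 / (minutes * 60)

read_rate = mb_per_second(35, 16)    # reading the uncompressed FRAPS source
write_rate = mb_per_second(35, 16)   # writing the Lagarith output
combined = read_rate + write_rate    # both hit the same disk at once

print(f"read  ~{read_rate:.0f} MB/s")
print(f"write ~{write_rate:.0f} MB/s")
print(f"combined ~{combined:.0f} MB/s")
```

At roughly 75 MB/s combined on one spindle, with the heads constantly seeking between the input and output files, the drive rather than the CPU plausibly sets the pace.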
The solution came at the cost of several hours of fighting against my own stupidity and tiredness: DebugMode FrameServer.
The tool is just a small "Input Output Plugin" which acts as a server, feeding the necessary frames either to the software you watch videos with or to encoding software such as MeGUI, at very fast speeds (about 40% of real time for me, higher with better CPUs than my i7 950, which I'll overclock soon enough).
Setting Vegas up to use DMFS lets me create the "intermediary" file mentioned in step 4, which comes out no bigger than 7MB (as opposed to 32.5GB) and can be used instantly (as opposed to waiting 16 minutes or so).
The setup is actually very simple; I just made several mistakes along the way which I had to pay for dearly. In the end everything worked out just fine and the results are extremely satisfactory!
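To put the saving in perspective, the frameserved signpost file is smaller than the old rendered intermediate by a factor in the thousands (sizes taken from the figures quoted above):

```python
# Size ratio between the old Lagarith intermediate and the DMFS signpost file.

def shrink_factor(old_gb, new_mb):
    """How many times smaller the new file is (1GB = 1024MB)."""
    return old_gb * 1024 / new_mb

print(f"intermediate shrinks by ~{shrink_factor(32.5, 7):.0f}x")
```

And unlike the 450MB x264 output, the signpost file carries no pixel data at all - it just tells the encoder where to fetch frames from, which is why it appears instantly.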