It is currently 02 Jul 2020, 23:26

All times are UTC [ DST ]

Search found 3 matches



Posted: 03 Mar 2018, 17:34 

I haven't said anything here in the last month, but I've been busy. I found the new TBC app, like the old one, failed pretty badly on a lot of real-world signals I threw at it. Originally I started working on improving the code, but it seemed to be built entirely around expected values from the NTSC signal specs, and from that point it was quite rigid and hard to adapt to be more tolerant of signals that didn't conform to its expectations. I ended up heading in a different direction and writing something new from scratch. It's still early days, and there isn't any repair process in place yet for really badly damaged sync events, but I've now got a workable program that can take raw composite video signals, identify sync events, group them into frames, and synchronise lines. There's no colour decoding yet, but here's the kind of raw frame output I get right now on the Fantasia sample I used before:

And here's a frame of progressive video from Sonic 2, which I couldn't decode previously:

The next big thing is adding colour decoding, which I'm not really looking forward to, but I'll give it a shot sometime soon. I've put the code up on GitHub at .

Some of the main advantages this has over the previous decoder are as follows:
- Universal format support (can decode signals with any number of lines, any line/sync timing, progressive or interlaced)
- Supports any sample type (templated code allows any data type to be used)
- Supports any sample rate (adaptive processing algorithms scale to the length of detected events in the data stream)
- Extensive source comments and a cleaner code structure
- Better performance (efficient algorithms and multithreaded processing give over a 5x speedup on my PC)
- Cross-platform code with no external dependencies (only Visual Studio projects are provided right now for Windows compilation; I'll add a makefile soon for Linux builds)
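To make the sample-type and sample-rate points concrete, here's a rough sketch of the approach (the names, structure, and ratio bounds here are mine for illustration, not lifted from the actual code): detection is templated on the sample type, and pulses are classified by their width relative to the measured line period rather than by absolute sample counts.

```cpp
#include <cstddef>
#include <vector>

// One detected sync event: where the signal dropped below the sync
// threshold, and how long it stayed there (in samples).
struct SyncEvent {
    std::size_t start;
    std::size_t length;
};

// Sample-type independence: the detection loop is templated, so 8-bit,
// 16-bit, or floating-point captures all work. The threshold is in the
// caller's units, derived from the observed sync tip / blanking levels
// rather than hard-coded IRE values.
template <typename SampleT>
std::vector<SyncEvent> findSyncEvents(const std::vector<SampleT>& samples,
                                      SampleT threshold)
{
    std::vector<SyncEvent> events;
    std::size_t runStart = 0;
    bool inSync = false;
    for (std::size_t i = 0; i < samples.size(); ++i) {
        const bool below = samples[i] < threshold;
        if (below && !inSync) {
            runStart = i;
            inSync = true;
        } else if (!below && inSync) {
            events.push_back({runStart, i - runStart});
            inSync = false;
        }
    }
    if (inSync) events.push_back({runStart, samples.size() - runStart});
    return events;
}

// Sample-rate independence: classify a pulse by its width relative to
// the measured line period, not by absolute counts. The ratio bounds
// are loosely based on NTSC timing (hsync ~4.7us of a ~63.6us line)
// and are illustrative only.
enum class PulseType { Equalizing, HSync, Broad, Unknown };

inline PulseType classifyPulse(double pulseSamples, double lineSamples)
{
    const double ratio = pulseSamples / lineSamples;
    if (ratio > 0.020 && ratio < 0.055) return PulseType::Equalizing; // ~2.3us
    if (ratio >= 0.055 && ratio < 0.120) return PulseType::HSync;     // ~4.7us
    if (ratio > 0.350 && ratio < 0.550) return PulseType::Broad;      // ~27us
    return PulseType::Unknown;
}
```

Because everything is expressed as ratios against the measured line length, the same classification works whether the capture is 8 MHz or 40 MHz.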

It's far from perfect yet, but the elements are there to build from. If my interest keeps up, I'll split the core processing work out to a library, so a set of thin tools can grow around the common set of code. I'd like to get this to the point where it can decode basically any analog video signal.


Posted: 04 Mar 2018, 15:53 

Haven't looked at your code yet, but it sounds good :)

To do color decoding easily, you need to get the signal going into the comb filter into a phase-aligned form clocked at four times the color subcarrier frequency. Once you have that, the math becomes a lot simpler: each pixel sits at a different 90-degree phase, so some relatively direct computations give you the YIQ signal, albeit with considerable cross color. From there, extending it into a 2D comb filter is easier.
Thanks for the tip. Right now I load the line samples into a cubic spline, so I can sample at any arbitrary point along it and have it interpolate from the original sample values. I'm aiming to perform the colour decoding directly from this spline representation rather than from a resampled form, as that minimizes any (further) loss converting from the sample data. I already decode the colour burst and phase-lock to it, since, like you, I use the phase of the colour burst to help synchronize line start positions, so it should be simple enough to start performing colour decoding with the information already at hand for each line. It's really just a matter of digesting the formulas I'm seeing on the page and figuring out exactly how to turn them into code. I'll start with very basic decoding, without any smarts to eliminate crosstalk, and work up from there. Once I've at least attempted that myself, I should understand enough about the process to interpret what you've already done, and figure out ways to build from there.

You could also recreate the format of the original output and pipe it into the comb filter in ld-decode. (Maybe I could - I just haven't had that much oomph outside of work lately)
I have no doubt this could be done. I hacked your program to dump out raw frames during decoding to compare with my own, so I know what I'm producing is close to the output you were getting. It took me a damn long time to match the results of your line synchronization, by the way; you did a great job.

Check out Video Demystified (there are some .pdf's of earlier editions floating around out there) - it goes into digital processing of composite signals.
Yep, I found that little gem just last week. A colleague at work heard what I was working on and unearthed it from a shelf. They'd got it for a project that was aborted 25 years ago, and it's been gathering dust in the corner ever since. I tracked down PDFs of later editions afterwards, but I'm glad I got my hands on the first edition. The later editions have a lot less information on analog video, as they focused increasingly on digital standards.

I've been busy working on something else too. This weekend I assembled this:
I'm still waiting on the inductor coil from another supplier, but I'm hoping that'll come in the mail tomorrow. After that, it should be good to go!

I'm also sitting on a stack of these:
It only cost $5 more to stamp out 20 of these boards, so that's what I did. If anyone wants a bare board, drop me a line.


Posted: 17 May 2018, 14:29 

I'm only working on NTSC decoding right now, so the PAL test cards aren't much help yet. I don't have any test card samples for an NTSC signal. I tried to capture a generated one today from a Mega Drive test ROM via its composite video output, but I had trouble getting my test machine to cooperate. A straight composite signal isn't in the right range for the Domesday Duplicator's analog input, and cxadc kept dropping samples randomly for some reason. It's worked before, so I'll have another try later. I should be able to get something usable out of it.

About a million bug fixes later, though, I have something that's vaguely in the region of viewable:
What I'm discovering is that phase locking to the colour burst is hard! I underestimated just how sensitive it would be. The slightest variation sends the hue off in perceivable ways, yet the signal from the RF isn't even regular, and captures from consoles and other composite-generating equipment I have show they all carry their own problems and idiosyncrasies: field timing that's off NTSC spec, and colour burst waves that look more like triangle or square waves, with wildly varying length and position. Analog receivers cope with them all, though, so you can't just go by what the specs say. The software needs to cope with anything an analog set could have had thrown at it, because there's a kaleidoscope of "incorrect" signals out there that work anyway.

I'm trying to write software that can adapt to all this, but writing something that tolerates off-spec signals, while still pulling imperfect, irregular signals back into conformity so the image is stable and correct, isn't an easy task. Underlying it all is the loss-of-information problem: as soon as we quantized the analog waveform we approximated it, and even with good interpolation techniques it's never as easy to derive precise, accurate information about the true frequency of a waveform, or the exact points at which it crosses certain thresholds, as it is to build an analog circuit that naturally derives that information in a precise, adaptive, and near-perfect manner from the true analog waveform.
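My current thinking for making the burst lock more robust is to correlate the whole burst window against quadrature references at the nominal subcarrier frequency and take the phase from that: a projection over the full window only sees the fundamental, so it shouldn't care much whether the burst looks like a sine, a triangle, or a square wave. A sketch of the idea (the function name and conventions are just for illustration):

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Estimate the colour burst phase by correlating the burst window with
// quadrature references at the nominal subcarrier frequency. Because
// this projects onto the fundamental, it still returns a usable phase
// for distorted triangle- or square-ish bursts. samplesPerCycle is the
// sample rate divided by the subcarrier frequency.
// Returns phi (radians) such that the burst is approximately
// cos(2 * pi * n / samplesPerCycle + phi).
double estimateBurstPhase(const std::vector<double>& burst,
                          double samplesPerCycle)
{
    const double kPi = 3.14159265358979323846;
    double sumI = 0.0, sumQ = 0.0;
    for (std::size_t n = 0; n < burst.size(); ++n) {
        const double angle = 2.0 * kPi * static_cast<double>(n) / samplesPerCycle;
        sumI += burst[n] * std::cos(angle);
        sumQ += burst[n] * std::sin(angle);
    }
    return std::atan2(-sumQ, sumI);
}
```

Averaging over the whole burst also suppresses noise on any single zero crossing, which is where my current edge-based approach falls over.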

I can see I need to take a good look at the TBC issue again, and separate it clearly from framing. I first need to decide what the "correct" timing appears to be for the video stream, then use that information to force the input signal into conformity with it, without distorting the hue mid-line by stretching or compressing the signal inappropriately. You've got a damn near perfect process working for that already, happycube, but for a limited range of input signals. I'll have to spend some time planning an alternate approach, I think, one that can achieve the same great result you've obtained while accepting a wider range of input signals. I might start by grafting your existing comb filtering code into the back-end of my video decoder, so I can use a mature colour decoding process and focus on making the input to it more flexible and adaptive. Once that process gets a result on par with what you're currently achieving, but with wider signal support, I can turn to the colour decoding issue itself. Even without further work, I think at that point it'll be up to the task for a lot of things I'd like to use it for.