Something that works well out of the box? Honestly, I’m not sure.
Back in the day, I’d turn to VapourSynth (or AviSynth+) filters and a lot of hand editing: basically going through the trouble sections one by one and seeing which combination of VHS-specific correction and regeneration looks best.
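For flavor, here’s roughly what that “fix one trouble section, splice it back” workflow looks like as a VapourSynth script. This is just a minimal sketch: the file path, frame range, and the havsfunc/QTGMC dependency are assumptions, and the actual filter chain would depend entirely on what the tape needs.

```python
# Minimal sketch: treat one damaged section more aggressively, splice it back.
# Assumes the ffms2 source plugin and havsfunc (for QTGMC) are installed;
# 'capture.mkv' and the frame range are placeholders.
import vapoursynth as vs
import havsfunc as haf

core = vs.core

clip = core.ffms2.Source('capture.mkv')  # hypothetical VHS capture

# Frame range of one trouble section, found by eye (or by a detection script).
start, end = 12_000, 12_450

before  = clip[:start]
trouble = clip[start:end]
after   = clip[end:]

# Heavier treatment only on the trouble section:
# single-rate QTGMC deinterlace (FPSDivisor=2 keeps the frame count) plus a mild blur.
fixed = haf.QTGMC(trouble, Preset='Slower', TFF=True, FPSDivisor=2)
fixed = core.std.Convolution(fixed, matrix=[1, 2, 1, 2, 4, 2, 1, 2, 1])

# Concatenation splices the sections back together in order.
result = before + fixed + after
result.set_output()
```

The point is less the specific filters and more the structure: slice, fix, splice, repeat per trouble section.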
These days, we have far more powerful tools. I’d probably start by training a LoRA for Wan 2B or something, then use it to straight-up regenerate the damaged tape sections with video-to-video. Then I’d write a script to detect those sections automatically, and mix in some “traditional” VapourSynth filters.
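As a sketch of the detection half (not any particular tool’s method, just one plausible heuristic): lean on VapourSynth’s PlaneStats frame properties and flag frames whose frame-to-frame luma difference spikes, which tends to catch dropouts, tracking errors, and similar glitches. The threshold and path below are made up, and scene cuts will trip it too, so the output needs a sanity pass.

```python
# Rough damage detection: flag frames whose luma differs abnormally from the
# previous frame, then collapse hits into (start, end) ranges for review or
# for feeding a video-to-video regeneration pass. Placeholder path/threshold.
import vapoursynth as vs

core = vs.core
clip = core.ffms2.Source('capture.mkv')

# Compare each frame to the previous one; PlaneStatsDiff is the normalized
# average absolute difference between the two clips' luma planes.
stats = core.std.PlaneStats(clip, clip[0] + clip[:-1])

DIFF_THRESHOLD = 0.12  # arbitrary; tune by inspecting known-bad sections

suspect = []
for n in range(stats.num_frames):
    f = stats.get_frame(n)          # note: iterating every frame is slow
    if f.props['PlaneStatsDiff'] > DIFF_THRESHOLD:
        suspect.append(n)

# Collapse consecutive frame numbers into ranges.
ranges = []
for n in suspect:
    if ranges and n == ranges[-1][1] + 1:
        ranges[-1] = (ranges[-1][0], n)
    else:
        ranges.append((n, n))

print(ranges)  # e.g. [(12003, 12017), (45210, 45212), ...] -- scene cuts included
```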
…But this is all very manual, Python-dev level with some media/ML knowledge, unfortunately. I’m much less familiar with any GUI that could accomplish this. Paid services out there likely offer it, but who knows how well they work.