I’ve been re-watching Star Trek: Voyager recently, and I’ve heard that when filming they didn’t clear the wider frame of filming equipment, so it’s not as simple as just going back to the original film. With the advancement of AI, is it only a matter of time until older programs like this are re-released in updated formats?
And if so, do you think AI could also upscale it to 4K? So theoretically you could take an SD 4:3 program and make it 4K 16:9.
I’d imagine it would be easier for the early episodes of Futurama, for example, since it’s a cartoon and therefore less detailed.
Adding new imagery at the edges of the frame that reliably looks good is currently beyond what AI can do, but it will likely become possible eventually. It’s fiction, so the AI making stuff up isn’t a problem in principle.
Upscaling is already something AI can do extremely well (again, if you’re ok with hallucinations).
For static images it already works well enough, in some cases, to be shown for a few seconds at a keynote. It doesn’t yet work consistently enough on moving video to be permanently enshrined as the official “remastered version”.
Now, someone uploading a watchable version to YouTube? That will happen within the next few years if it hasn’t already. But that version would be widely ridiculed if it were released officially, because something, somewhere will be off and fans will notice.
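For what it’s worth, the per-frame upscaling part is already easy to try at home. A rough sketch using OpenCV’s dnn_superres module (needs opencv-contrib-python plus a pretrained EDSR model file you download separately; the file names here are just placeholders):

```python
# Rough sketch: 4x super-resolution on a single SD frame.
# Requires opencv-contrib-python and a pretrained EDSR_x4.pb model file.
import cv2

sr = cv2.dnn_superres.DnnSuperResImpl_create()
sr.readModel("EDSR_x4.pb")        # pretrained super-resolution weights
sr.setModel("edsr", 4)            # model name + upscale factor

frame = cv2.imread("voyager_frame.png")   # one SD frame, e.g. 720x480
upscaled = sr.upsample(frame)             # -> 2880x1920
cv2.imwrite("voyager_frame_4x.png", upscaled)
```

That’s single frames, though. Doing it across a whole episode without flicker or details shimmering from shot to shot is the part that isn’t solved well enough for an official release.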
Isn’t that the real beauty of AI? We could go over the parts that fans flag as distracting or obviously wrong and release a new version. This would have to be done on a streaming service, so no physical copies until there’s a finalised version everyone is happy with.
I guess it could be done as a Bandersnatch-style experiment on one streaming platform, but it’s so far from the regular way things are done that it seems unlikely.
I think it would be possible. But adding previously unseen content would mean changing / re-directing the movie/show.
Each scene is set up and framed deliberately by the director; should AI just change that? It’s a similar problem to pan-and-scan, where content was removed to fit 4:3.
You wouldn’t want to add content to the left and right of the Mona Lisa, would you? And if you did, with what? Continue the landscape, which just adds more uninteresting space? Now she’s sitting in a vast emptiness and you’ve already changed the tone of the painting. Or would you add other people? That pulls the focus away from her, which is even worse. And that’s just a single-frame example; moving pictures bring even more problems.
It would be an interesting experiment, but imo it wouldn’t improve the quality of the medium, quite the contrary.
I think you’re looking at it from the wrong direction. Instead of adding new stuff to get the extra width, you could get AI to stretch the image to fit 16:9 and then redraw everything so it no longer looks stretched out. Slim the people and text back down. Things like bottles on a table would be slimmed back to normal-looking bottles, while the table surface gets drawn a bit longer horizontally to fill the space, etc.
Done this way, there would be a minimal amount of material the AI would have to invent that wasn’t in the original 4:3. It would mostly just be fixing things that look wider than they should.
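To make the numbers concrete: going from 4:3 to 16:9 means the width grows by a factor of (16/9)/(4/3) = 4/3, so after the naive stretch everything is about 33% too wide. The stretch itself is trivial; the AI pass that “slims things back down” is the hypothetical part. A rough sketch of the first step only (file names are placeholders):

```python
# Rough sketch of step one: anamorphically stretch a 4:3 frame to 16:9.
# Every object ends up ~33% too wide -- that distortion is what a
# (hypothetical) AI pass would then have to redraw away object by object.
import cv2

frame = cv2.imread("frame_4x3.png")        # e.g. 960x720 (4:3)
h, w = frame.shape[:2]
new_w = round(h * 16 / 9)                  # 720 * 16/9 = 1280
stretched = cv2.resize(frame, (new_w, h),  # 1280x720 (16:9), distorted
                       interpolation=cv2.INTER_CUBIC)
cv2.imwrite("frame_16x9_stretched.png", stretched)
```

The interesting, unsolved bit is the second pass: restoring each object’s proportions while keeping the background filled in and consistent from frame to frame.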
Stretching and then correcting the proportions is still stretching: you’ve changed the spacing and relative positions of everything in the frame.
Framing is not only about the border of the frame.
I mentioned how that would be taken care of in the bottles-on-a-table description earlier. Also, the framing of shots would change very little.
I read the table example again and I don’t see how it describes a solution.
And either way, adding previously unseen content would still mean changing / re-directing the movie/show.
You could see this with The Wire 16:9 remaster. They went back to the original film negatives, which captured a wider image than the 4:3 framing the show was composed for, and re-framed everything for 16:9. As a result the framing felt a bit off, and the whole thing came across as slightly awkward / amateurish.