A team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean’s optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene’s true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater “world” that can then be explored virtually.
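To make the color-correction idea concrete, underwater restoration methods generally invert a physical image-formation model in which each color channel is attenuated with distance and mixed with backscattered veiling light. Below is a minimal sketch of that classic model; the attenuation coefficients, backscatter color, and range are illustrative assumptions, not values from the SeaSplat paper, whose model is richer.

```python
# Minimal sketch of the classic underwater image-formation model that
# color-correction methods invert (SeaSplat's actual model is richer):
#   observed = true_color * exp(-beta * range) + B_inf * (1 - exp(-beta * range))
# All coefficients below are illustrative assumptions, not paper values.
import numpy as np

beta = np.array([0.40, 0.10, 0.05])   # assumed R,G,B attenuation (1/m); red dies fastest
B_inf = np.array([0.05, 0.25, 0.35])  # assumed bluish veiling-light (backscatter) color

def restore(observed_rgb, range_m):
    """Invert the model above to recover the 'dry' color."""
    t = np.exp(-beta * range_m)                  # per-channel transmission
    J = (observed_rgb - B_inf * (1.0 - t)) / t   # undo backscatter, then attenuation
    return np.clip(J, 0.0, 1.0)

# A murky blue-green pixel 8 m away regains most of its red channel.
print(restore(np.array([0.06, 0.28, 0.38]), 8.0))
```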
The researchers have dubbed the new tool SeaSplat, in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail from any perspective.
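At its core, 3DGS represents a scene as a large set of colored, semi-transparent Gaussian blobs that are projected into the camera and blended front to back. The toy NumPy sketch below shows just that compositing step for isotropic Gaussians; real 3DGS uses anisotropic covariances, learned spherical-harmonic colors, and a tiled GPU rasterizer, and every number here is made up.

```python
# Toy front-to-back Gaussian-splat renderer in NumPy. Real 3DGS uses
# anisotropic covariances, learned spherical-harmonic colors, and a tiled
# GPU rasterizer; this isotropic version only illustrates the compositing.
import numpy as np

def render(means, colors, opacities, sigmas, f, W, H):
    """means: (N,3) camera-space centers (+z into scene); colors: (N,3) RGB;
    opacities, sigmas: (N,); f: focal length in pixels."""
    img = np.zeros((H, W, 3))
    T = np.ones((H, W))                      # per-pixel transmittance
    ys, xs = np.mgrid[0:H, 0:W]
    for i in np.argsort(means[:, 2]):        # nearest Gaussians first
        x, y, z = means[i]
        if z <= 0:
            continue
        u, v = f * x / z + W / 2, f * y / z + H / 2   # pinhole projection
        s = f * sigmas[i] / z                         # projected std (pixels)
        g = np.exp(-((xs - u) ** 2 + (ys - v) ** 2) / (2 * s ** 2))
        alpha = np.clip(opacities[i] * g, 0.0, 0.999)
        img += (T * alpha)[..., None] * colors[i]     # front-to-back blend
        T *= 1.0 - alpha
    return img

# A red blob in front partially hides a blue blob behind it.
img = render(np.array([[0.0, 0.0, 2.0], [0.1, 0.0, 4.0]]),
             np.array([[1.0, 0.2, 0.2], [0.2, 0.2, 1.0]]),
             np.array([0.8, 0.8]), np.array([0.10, 0.30]), 200.0, 64, 64)
```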
For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, where a vehicle tied to a ship can explore and send its images up to the ship’s computer.
Cool. But I couldn’t watch that video. The monotone voice is unbearable. They’re obviously incredibly intelligent, but I find it interesting that speaking naturally is a challenge.
My first thought is that maybe you can see sharks from further away. Sneaky bastards
That’s actually a really clever use of Gaussian splats, though I’m not really sure what the practical use of it would be. You can probably create some really cool, interactive renders of shipwrecks and reefs and such, but I’m not immediately seeing the value beyond edutainment content.
Corridor does a really good breakdown of what Gaussian splats are here, for those interested. The explanation ends when the sponsor segment begins, for those who don’t want to watch the whole video.
It seems like it will allow more faithful/accurate scans while keeping the sensor farther away from the subjects, or in cloudy/foggy conditions.
Lotsa shy sea creatures
Yes, but as I understand it, this requires shots taken from different vantage points, which it then recreates as a clear, zoomed-out picture with a better sense of depth. So unfortunately nothing to help see shy sea creatures.
Older paper about this topic: https://www.researchsquare.com/article/rs-4082073/v1
I think I remember seeing another one too, but I can’t find it.
Can’t wait to try this on my dive photos shot without red filters.
I get the impression this is a video-only thing, because you need multiple vantage points of the scene. You can still extract a single frame at the end, of course (like the article itself does), but you’ll need to shift around meaningful distances, like attack submarines do with Target Motion Analysis.
Yep, since this is using Gaussian splatting, you’ll need multiple camera views and an initial point cloud. You get both for free from video via COLMAP.
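For anyone who wants to try that recipe, here’s a rough sketch using ffmpeg plus the pycolmap bindings. The video filename, frame rate, and paths are placeholders; the resulting COLMAP folder (poses plus sparse points) is the usual input to Gaussian-splatting trainers.

```python
# Rough sketch: get camera poses + a sparse point cloud from a dive video
# with COLMAP (via pycolmap), the two inputs Gaussian splatting needs.
# Filenames, frame rate, and paths are placeholders.
import subprocess
from pathlib import Path

import pycolmap  # pip install pycolmap

video = "dive.mp4"
frames = Path("frames"); frames.mkdir(exist_ok=True)
sparse = Path("sparse"); sparse.mkdir(exist_ok=True)
db = "database.db"

# ~2 fps keeps consecutive frames overlapping but at distinct vantage points.
subprocess.run(["ffmpeg", "-i", video, "-vf", "fps=2",
                str(frames / "%05d.jpg")], check=True)

pycolmap.extract_features(db, str(frames))  # detect + describe SIFT features
pycolmap.match_exhaustive(db)               # match features across frames
maps = pycolmap.incremental_mapping(db, str(frames), str(sparse))
print(maps[0].summary())  # poses + sparse points, written under sparse/0
```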
Same!