A team from MIT and the Woods Hole Oceanographic Institution (WHOI) has developed an image-analysis tool that cuts through the ocean’s optical effects and generates images of underwater environments that look as if the water had been drained away, revealing an ocean scene’s true colors. The team paired the color-correcting tool with a computational model that converts images of a scene into a three-dimensional underwater “world” that can then be explored virtually.
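(For context: the article doesn’t give the math, but underwater color-correction methods of this kind typically invert a standard image-formation model, in which the observed color is the true color attenuated by the water plus a backscatter “veil.” A minimal pure-Python sketch — the per-channel coefficients below are illustrative, not from the paper:)

```python
import math

def attenuate(J, z, beta_d, B_inf, beta_b):
    """Simulate seawater degrading a true-color pixel J (per-channel, 0..1)
    at camera range z metres: I = J*exp(-beta_d*z) + B_inf*(1 - exp(-beta_b*z))."""
    return [j * math.exp(-bd * z) + b * (1.0 - math.exp(-bb * z))
            for j, bd, b, bb in zip(J, beta_d, B_inf, beta_b)]

def restore(I, z, beta_d, B_inf, beta_b):
    """Invert the model: subtract the backscatter veil, then undo attenuation."""
    return [(i - b * (1.0 - math.exp(-bb * z))) * math.exp(bd * z)
            for i, bd, b, bb in zip(I, beta_d, B_inf, beta_b)]

# Red light attenuates fastest underwater, so beta_d is largest in the
# red channel; the veil tends toward blue-green (made-up numbers).
J = [0.8, 0.5, 0.3]                        # "true" RGB, water drained away
beta_d, beta_b = [0.45, 0.18, 0.10], [0.30, 0.15, 0.12]
B_inf = [0.05, 0.20, 0.30]
I = attenuate(J, 3.0, beta_d, B_inf, beta_b)
```

Given the range and coefficients, the inversion recovers the original colors exactly; in practice estimating those per-pixel ranges and coefficients is the hard part.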

The researchers have dubbed the new tool SeaSplat, in reference to both its underwater application and a method known as 3D Gaussian splatting (3DGS), which takes images of a scene and stitches them together to generate a complete, three-dimensional representation that can be viewed in detail, from any perspective.
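(Not from the article, but for readers unfamiliar with 3DGS: a “splat” is an anisotropic 3D Gaussian carrying a position, a covariance stored as per-axis scale plus a rotation, an opacity, and view-dependent color; rendering projects the Gaussians to the screen and alpha-composites them front to back. A hypothetical minimal sketch of the record and the compositing rule:)

```python
from dataclasses import dataclass

@dataclass
class Gaussian3D:
    # What a single splat in a 3DGS scene typically stores:
    position: tuple   # (x, y, z) center in world space
    scale: tuple      # per-axis extent; with `rotation` it defines the covariance
    rotation: tuple   # unit quaternion (w, x, y, z)
    opacity: float    # in (0, 1)
    sh_coeffs: list   # spherical-harmonic coefficients for view-dependent color

def alpha_blend(samples):
    """Front-to-back compositing of (color, alpha) samples along one pixel ray,
    as a 3DGS rasterizer does: C = sum_i c_i * a_i * prod_{j<i} (1 - a_j)."""
    color, transmittance = 0.0, 1.0
    for c, a in samples:
        color += c * a * transmittance
        transmittance *= (1.0 - a)
    return color
```

Once the first fully opaque splat on a ray is reached, everything behind it contributes nothing, which is what makes the representation fast to render from any viewpoint.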

For now, the method requires hefty computing resources in the form of a desktop computer that would be too bulky to carry aboard an underwater robot. Still, SeaSplat could work for tethered operations, where a vehicle tied to a ship can explore and send its images up to the ship’s computer.

  • pfr@lemmy.sdf.org · 8 hours ago

    Cool. But couldn’t watch that video. The monotone voice is unbearable. They’re obviously incredibly intelligent, but I find it interesting that speaking naturally is a challenge.

  • Chozo@fedia.io · 23 hours ago

    That’s actually a really clever use of Gaussian splats, though I’m not really sure what the practical use of it would be. You can probably create some really cool, interactive renders of shipwrecks and reefs and such, but I’m not immediately seeing the value beyond edutainment content.

    Corridor does a really good breakdown of what Gaussian splats are here, for those interested. The explanation ends when the sponsor segment begins, for those who don’t want to watch the whole video.

    • Shawdow194@fedia.io · 19 hours ago

      It seems like it will allow more faithful/accurate scans while keeping the sensor farther away from the subjects, or in cloudy/foggy conditions

      Lotsa shy sea creatures

      • embed_me@programming.dev · 6 hours ago

        Yes but as per my understanding, this requires shots taken at different vantage points and then it recreates a clear zoomed-out picture with better depth of view. So unfortunately nothing to help see shy sea creatures

    • NoSpotOfGround@lemmy.world (OP) · 15 hours ago

      I get the impression this is a video-only thing because you need multiple vantage points of the scene. You can still extract a single frame at the end, of course (like the article itself does), but you’ll need to shift around meaningful distances, like attack submarines do with Target Motion Analysis.
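(A back-of-the-envelope illustration of why meaningful camera motion matters, using a simple pinhole/stereo approximation — the focal length and ranges below are made up:)

```python
def disparity_px(depth_m, baseline_m, focal_px):
    """Apparent pixel shift (parallax) of a point at depth_m metres when
    the camera translates baseline_m sideways: d = f * B / Z (pinhole model)."""
    return focal_px * baseline_m / depth_m

# For a reef 10 m away seen through a ~1000 px focal length:
# a 5 cm wobble gives only ~5 px of parallax, while a 1 m sweep gives
# ~100 px -- enough baseline for the 3D geometry to be well constrained.
```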

      • KingRandomGuy@lemmy.world · 8 hours ago

        Yep, since this is using Gaussian Splatting you’ll need multiple camera views and an initial point cloud. You get both for free from video via COLMAP.
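(For anyone wanting to try it: a typical way to get those inputs from dive footage is to extract frames with ffmpeg and run COLMAP’s standard sparse-reconstruction pipeline. The file names and the 2 fps sampling rate are arbitrary choices, not from the paper:)

```shell
# Pull still frames out of the dive video
ffmpeg -i dive.mp4 -vf fps=2 frames/%05d.png

# COLMAP sparse reconstruction: detect features, match them across
# frames, then solve for camera poses and a sparse point cloud
# (the camera views and initialization that 3DGS training needs)
colmap feature_extractor --database_path db.db --image_path frames
colmap exhaustive_matcher --database_path db.db
colmap mapper --database_path db.db --image_path frames --output_path sparse
```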