It’s literally just a JSON map of per pixel words used to “encode” the color.

The worst part of AI generated content is that people won’t give new ideas, art, etc. the benefit of the doubt and will just assume it’s slop.

  • altkey (he\him)@lemmy.dbzer0.com · 14 points · 13 hours ago

    Reading OP and thinking about their misinformed understanding of what they are doing, I came upon an idea I propose to all of you: the almighty Babylonian Compression Algorithm.

    As long as we have all combinations of (say, 256x256px) images in the database, we can cut down image size to just a reference to a file in said database.

    It produces a bit-for-bit copy of any image with no loss whatsoever, so it puts OOP’s project to shame. The only little, almost non-existent problem is having access to said database, bloated with every existing but also not-yet-existing image. But since OOP’s solution depends on proprietary ChatGPT on someone else’s server, we are on par there.

    • Armok_the_bunny@lemmy.world · 9 points · 12 hours ago

      Funny enough, that actually wouldn’t be a more efficient compression algorithm: the size of the file reference would at best be exactly the same as the image being referenced, because any fewer bits would lead to duplicate reference locations.
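The counting argument above can be checked directly: an index that can distinguish every possible 256×256 24-bit image needs exactly as many bits as the raw image itself. A minimal sketch in Python:

```python
# Pigeonhole check for the "Babylonian" database: count all distinct
# 256x256 images with 24-bit RGB pixels, then see how many bits a
# unique reference into that database would need.
num_images = 2 ** (256 * 256 * 24)   # every possible image, one entry each

# Bits needed to address indices 0 .. num_images - 1.
index_bits = (num_images - 1).bit_length()

raw_bits = 256 * 256 * 24            # bits in the raw image itself

print(index_bits, raw_bits)          # both are 1572864: the "reference"
                                     # is exactly as big as the image
```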

      • qaz@lemmy.world (OP) · 1 point · edited 8 hours ago

        Funny thing is that it would probably be more efficient than OOP’s approach, since that stores a word in a JSON map for each pixel.
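For scale, even a toy version of the word-per-pixel scheme balloons past the raw bytes. A hypothetical sketch (the exact JSON layout of OOP's map is not shown in the thread, so the key/value shape here is assumed):

```python
import json

# Hypothetical per-pixel "word map" for a tiny 2x2 image: each pixel's
# color is stored as an English word keyed by its coordinates.
word_map = {
    "0,0": "crimson",
    "0,1": "teal",
    "1,0": "crimson",
    "1,1": "ivory",
}

json_bytes = len(json.dumps(word_map).encode("utf-8"))
raw_bytes = 2 * 2 * 3  # 4 pixels at 3 bytes (24-bit RGB) each

# The JSON map is several times larger than the raw pixel data.
print(json_bytes, raw_bytes)
```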

  • shaggyb@lemmy.world · 25 points · 15 hours ago

    Here’s my “symbolic image compressor”:

    American Gothic is a painting of two people standing in front of a farmhouse. One has a pitchfork.

    Are you impressed? I used symbols to compress the image down to under a kilobyte. And I didn’t need an AI.

    • AnarchistArtificer@slrpnk.net · 7 points · 11 hours ago

      Wow, that’s super impressive. The compression is so efficient that it’s like I can see the original image in my head. Truly, we are living in the future.

  • Dima@feddit.uk · 32 points · 17 hours ago

    Also infuriating is when the OP gets told by a mod to stop posting AI slop comments and OP responds:

    I’m explaining the project in as detailed and clear a manner as possible - using the LLM to do it. The same one that helped me build it. Shut it down then man, why warn me?

  • Victor@lemmy.world · 27 points · 19 hours ago

    It’s hard to imagine turning an image into JSON would manage to compress it. 🤔

  • A_norny_mousse@feddit.org · 8 points · 18 hours ago

    Is that like converting raster images to SVG? Either it embeds the actual image as data, or it “vectorizes” every fucking pixel. Filesize will show you which.