Hey guys,

I want to shred/sanitize my SSDs. If it were a normal hard drive I would stick to ShredOS / nwipe, but since SSDs seem to be a little more complicated, I need your advice.

Reading through some posts on the internet, many people recommend using the manufacturer's own software for sanitizing. Currently I am using an SN850X from Western Digital, but I also have a 990 PRO from Samsung. Neither manufacturer seems to offer a Linux-compatible tool for this kind of operation.

What would your approach be to shredding an SSD (without physically destroying it)?

~sp3ctre

  • Axum@lemmy.blahaj.zone · 1 month ago

    So much bad advice in here relating to NVMe drives.

    Any NVMe drive worth its salt these days is an Opal-compliant, self-encrypting drive.

    This means that in Linux you simply install nvme-cli and do a mode 2 crypto erase: the encryption key is dropped and all data on the drive becomes unreadable.

    Y'all could stand to get with the times a bit more and learn what NVMe drives actually bring to the table.

    https://tinyapps.org/docs/nvme-secure-erase.html

    For drives with encryption disabled, a mode 1 wipe will have the controller fill all regions with meaningless data instead.
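
    A rough sketch of what that looks like with nvme-cli (device names here are placeholders; check yours with nvme list and be absolutely sure you target the right drive):

    sudo nvme id-ctrl /dev/nvme0 -H | grep -i "crypto erase"   # does the drive support crypto erase?
    sudo nvme format /dev/nvme0n1 --ses=2                      # mode 2: cryptographic erase (drops the key)
    sudo nvme format /dev/nvme0n1 --ses=1                      # mode 1: user-data erase (controller overwrites everything)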

    • catloaf@lemm.ee · 1 month ago

      This. And then when it’s done, use a hex editor and look at the raw disk to make sure it actually worked. Some manufacturers don’t implement it properly.

      • sp3ctre@feddit.org (OP) · 1 month ago

        Sorry, but can you explain a little how this is done exactly? What should I see when everything has worked correctly?

        • catloaf@lemm.ee · 1 month ago

          Preferably all zeroes, possibly random data or a fixed string. Certainly not anything readable.
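
          A quick spot-check might look something like this (the device name is a placeholder):

          sudo hexdump -C /dev/nvme0n1 | head -n 20                                         # start of the disk
          sudo dd if=/dev/nvme0n1 bs=1M skip=500 count=1 2>/dev/null | hexdump -C | head    # sample 1 MiB at the 500 MiB mark

          If the wipe worked, hexdump collapses runs of identical bytes, so an all-zero region shows up as a single line of 00s followed by a * rather than anything readable.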

          • Phoenixz@lemmy.ca · 1 month ago

            No, you don't want all zeroes, you want random data.

            Within Linux you can quite easily do this yourself too.
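
            For example (a sketch only; /dev/nvme0n1 is a placeholder, so triple-check the device name before running anything like this):

            sudo dd if=/dev/urandom of=/dev/nvme0n1 bs=4M status=progress conv=fsync   # fill the visible space with random data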

  • Anna@lemmy.ml · 1 month ago

    OK, there are two completely opposite schools of thought on shredding SSDs:

    1. All SSDs have TRIM functionality, so any unused data gets set to 0 automatically by the OS or, in some cases, by the SSD controller.

    2. Even if TRIM sets it to zero, there is always some deviation from the original zero, and a very, very sophisticated attacker can recover the actual data. Simply using shred or /dev/zero doesn't help either, because the SSD controller always writes to a different physical location, even for the same file. The only real way to ensure data can't be recovered is to smash the drive.

    Pick and choose depending on your threat model. If you're just selling the drive, or you know that no nation-state actors are after your data, then just do a normal delete and run TRIM. If you think someone with real capabilities is after your data and is willing to spend a few hundred thousand dollars, or even a few million, for whatever is on your SSD, then microwave it and smash it with a hammer. No need to shred or zero.
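
    The "normal delete, then TRIM" route might look roughly like this (paths and device names are placeholders):

    rm -rf /mnt/data/sensitive-stuff     # normal delete
    sudo fstrim -v /mnt/data             # ask the SSD to discard the freed blocks
    # or, for a whole unmounted device:
    sudo blkdiscard /dev/nvme0n1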

    • JustEnoughDucks@feddit.nl · 1 month ago

      No, SSDs have a ton of wear leveling, where data is shifted around rather than deleted. Erasing cells wears out the SSD, so the controller holds off on it as much as possible. SSDs are typically around 10% bigger than advertised just to prolong their life.

      Even if you write the whole thing with random data and then zeros, it will still have old data in blocks that are inaccessible to normal users.

      It's always best to use full-disk encryption, or to keep any sensitive data under filesystem-level encryption like Plasma Vaults or fscrypt.
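
      A rough fscrypt sketch, assuming an ext4 filesystem with the encrypt feature enabled and /home on its own mount point (the directory name is just an example):

      sudo fscrypt setup                      # one-time global setup (/etc/fscrypt.conf)
      sudo fscrypt setup /home                # prepare the filesystem that will hold encrypted directories
      mkdir ~/private
      fscrypt encrypt ~/private               # protect the directory with a passphrase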

      • Churbleyimyam@lemm.ee · 1 month ago

        That’s good to know.

        All of my own drives are encrypted except for a USB stick that I use for transferring files to a Windows machine.

  • glitching@lemmy.ml · 1 month ago

    For future reference, encrypt your drives from the get-go. Even if it's not a mobile device, you can use an on-device key file to unlock it without a passphrase.
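
    A minimal sketch of the on-device-key idea with LUKS (device, key path, and mapping name are all placeholders):

    sudo cryptsetup luksFormat /dev/sdb1                      # encrypt the partition
    sudo dd if=/dev/urandom of=/root/sdb1.key bs=64 count=1   # generate a key file
    sudo chmod 600 /root/sdb1.key
    sudo cryptsetup luksAddKey /dev/sdb1 /root/sdb1.key       # enroll the key file
    # then an /etc/crypttab entry along the lines of:  data  /dev/sdb1  /root/sdb1.key  luks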

    source: used shred on a couple of 3.5" 4 TB drives before selling them, took ages…

  • solrize@lemmy.ml · 1 month ago

    Don't ever write any really private data to the SSD in cleartext. Use an encrypted file system and "erase" by throwing away the key. That said, for modern fast SSDs the performance overhead of the encryption might be a problem. For the old SATA SSD in my laptop, I don't notice it.
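
    With LUKS, "throwing away the key" can be as simple as destroying the keyslots (a sketch; the device name is a placeholder and this is irreversible):

    sudo cryptsetup luksErase /dev/sdb2    # wipe all keyslots so the data can never be decrypted again
    sudo cryptsetup luksDump /dev/sdb2     # verify that no keyslots remain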

  • I did some light reading. I see claims that wear leveling only ever writes to zeroed sectors. Let me get this straight:

    If I have a 1 TB SSD, and I write 1 TB of SecretData, and then I delete it and write 1 TB of garbage to the disk, it's not actually holding 2 TB of data, with the SecretData hidden underneath wear leveling? That's the claim? And if I overwrite that with another 1 TB of garbage, it's holding, what now, 3 TB of data? Each data sequence hidden somehow by the magic of wear leveling?

    Skeptical Ruaraidh is skeptical. Wear leveling ensures data on an SSD is written to free sectors with the lowest write count. It can't possibly be retaining data if an amount equal to the full size of the device has been written to it.

    I see a popular comment on SO saying you can't trust dd on SSDs, and I challenge that: in this case, wiping an entire disk by dumping /dev/random must clean the SSD of all other data. Otherwise, someone's invented the storage version of a perpetual motion device. To be safe, sync and read it, and maybe dump again, but I really can't see how an SSD would hold more data than it can.

    dd if=/dev/random of=/dev/sdX bs=2M status=progress   # no count: dd runs until it hits the end of the device

    If you’re clever enough to be using zsh as your shell:

    repeat 3 (dd if=/dev/random of=/dev/sdX bs=2M ; sync ; dd if=/dev/sdX of=/dev/null bs=2M)

    You use up a few of every single cell's write cycles; with modern lifespans of 3,000-100,000 writes per cell, that's not significant.

    Someone mentioned blkdiscard. If you really aren't concerned about forensic analysis, this is probably the fastest and least impactful answer: it won't shorten cell lifetimes by even a measly couple of writes. But it also doesn't actually remove the data; it just tells the SSD that those cells are free and empty. Probably really hard to reconstruct data from that, but also probably not impossible. dd is the shredding option: safer, slower, and with a tiny impact on drive lifespan.
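
    For reference, the blkdiscard variants that come up later in this thread look like this (the device name is a placeholder; the drive must not be mounted):

    sudo blkdiscard /dev/sdX            # plain discard: just marks every block as unused
    sudo blkdiscard --secure /dev/sdX   # secure discard, only if the device supports it
    sudo blkdiscard -z /dev/sdX         # zero-fill the device instead of discarding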

    • patatahooligan@lemmy.world · 1 month ago

      in this case, wiping an entire disk by dumping /dev/random must clean the SSD of all other data.

      Your conclusion is incorrect because you assumed the SSD has either exactly the advertised storage or infinite storage. What if it's over-provisioned by a small margin, though?

      • Then overwriting by a few gigs more than the advertised size, reading the entire disk, and writing it again - as I put in my example - should work. In any case, blkdiscard is not guaranteed to zero data unless the disk specifically supports that capability, and data can be forensically extracted from a blkdiscarded disk.

        The Arch wiki says blkdiscard -z is equivalent to running dd if=/dev/zero.

        • patatahooligan@lemmy.world · 1 month ago

          I don't see how attempting to overwrite would help. The additional blocks are not addressable on the OS side. dd will exit because it reached the end of the visible device space, but blocks will remain untouched internally.

          The Arch wiki says blkdiscard -z is equivalent to running dd if=/dev/zero.

          Where does it say that? Here it seems to support the opposite. The linked paper says that two passes worked “in most cases”, but the results are unreliable. On one drive they found 1GB of data to have survived 20 passes.

          • Sorry, it wasn’t the Arch wiki. It was this page.

            I hate using Stack Exchange as a source of truth, but the Arch wiki references this discussion, which points out that not all SSDs support "Deterministic read ZEROs after TRIM", meaning a pure blkdiscard is not guaranteed to clear data (unless the device advertises that feature), leaving it available for forensics. That means having to use --secure, which is (also) not supported by all devices, which in turn means having to use -z, which the previous source claims is equivalent to dd if=/dev/zero.
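
            If you want to check what your own drive claims, something like this should show it (device names are placeholders):

            sudo hdparm -I /dev/sdX | grep -i trim               # SATA: look for "Deterministic read ZEROs after TRIM"
            sudo nvme id-ns /dev/nvme0n1 -H | grep -iA2 dlfeat   # NVMe: the DLFEAT field says what reads of deallocated blocks return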

            So the SSD is hiding extra, inaccessible cells. How does blkdiscard help? Either the blocks are accessible, or they aren't. How are you getting at the hidden cells with blkdiscard? The paper you referenced does not mention blkdiscard directly, as that's a Linux-specific command, but other references imply or state that it's just calling TRIM. That same paper, in a footnote below section 3.3, claims TRIM adds no reliable data security.

            It looks - especially from that security paper - like the cells are inaccessible and not reliably clearable by any mechanism. blkdiscard then adds no security over dd, and I'd be interested to see whether, with -z, it's any faster than dd, since it would perforce have to write zeros to all blocks just the same, rather than just marking them "discarded".

            I feel that, unless you know the SSD supports secure TRIM, or you always use -z, dd is safer, since blkdiscard can give you a false sense of security, and TRIM adds no assurances about wiping those hidden cells.

            • patatahooligan@lemmy.world · 1 month ago

              So the SSD is hiding extra, inaccessible cells. How does blkdiscard help? Either the blocks are accessible, or they aren't. How are you getting at the hidden cells with blkdiscard?

              The idea is that blkdiscard will tell the SSD’s own controller to zero out everything. The controller can actually access all blocks regardless of what it exposes to your OS. But will it do it? Who knows?

              I feel that, unless you know the SSD supports secure TRIM, or you always use -z, dd is safer, since blkdiscard can give you a false sense of security, and TRIM adds no assurances about wiping those hidden cells.

              After reading all of this I would just do both… Each method fails in different ways so their sum might be better than either in isolation.
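
              "Both" would be something along these lines (the device is a placeholder, and the order is a judgment call):

              sudo dd if=/dev/urandom of=/dev/nvme0n1 bs=2M status=progress conv=fsync   # overwrite every block the OS can address
              sudo blkdiscard /dev/nvme0n1                                               # then discard, so the controller can clean up what dd can't reach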

              But the actual solution is to always encrypt all of your storage. Then you don’t have to worry about this mess.

              • The idea is that blkdiscard will tell the SSD’s own controller to zero out everything

                Just to be clear, blkdiscard alone does not zero out anything; it just marks blocks as empty. --secure tells compatible drives to additionally wipe the blocks; -z actually zeroes out the contents of the blocks like dd does. The difference is that, without --secure or -z, the data is still in the cells.

                always encrypt all of your storage

                Yes! Although I don't think hindsight is helpful for OP.