The revived No JS Club celebrates websites that don’t use JavaScript, the powerful but sometimes overused language that’s been bloating the web and crashing tabs since 1995. The No CSS Club goes a step further and forbids even a scrap of styling beyond the browser defaults. And there is even the No HTML Club, where you aren’t allowed to use HTML at all. Plain text websites!

The modern web is the pure incarnation of evil. When Satan has a 1v1 with his manager, he confers with the modern web. If Satan is Sauron, then the modern web is Melkor [1]. Every horror that you can imagine is because of the modern web. The modern web is not an existential risk (X-risk), but it is an astronomical suffering risk (S-risk) [2]. It is the duty of each and every man, woman, and child to revolt against it. If you’re not working on returning civilization to ooga-booga, you’re a bad person.

A compromise with the clubs is called for: a hypertext brutalism that uses the raw materials of the web to functional, honest ends while allowing web technologies to support clarity, legibility, and accessibility. Compare this notion to the web brutalism of recent years, which started off in a similar vein but soon became a self-subverting aesthetic: sites using 2.4MB frameworks to add text-shadow: 40px 40px 0px hotpink to 400kb Helvetica webfonts that were already on your computer.

I also like the idea of implementing “hypotext” as an inversion of hypertext. This would somehow avoid the failure modes of extending the structure of text by failing in other ways that are more fun. But I’m in two minds about whether that would be just a toy (e.g. references banished to metadata, i.e. footnotes are the hypertext) or something more conceptual that uses references to collapse the structure of text rather than extend it (e.g. links are includes and going near them spaghettifies your brain). The term is already in use in a structuralist sense, which is to say there are 2 million words of French I have to read first if I want to get away with any of this.

Republished under Creative Commons terms from the Boing Boing original article.

    • tehBishop@sh.itjust.works · 3 days ago

      Those websites are amazing, thank you.

      I checked the source to find the song, only to realize I already had it in my playlist 😂

  • the_q@lemmy.zip · 5 days ago

    Get this bs outta here. I write on paper! No one knows my thoughts or feelings!!

    • stormeuh@lemmy.world · 5 days ago

      What devilry is this? Written word? Real cultures use oral history to store knowledge!

      • jsomae@lemmy.ml · 5 days ago

        Passing information between two simultaneously existing entities? Get outta here! All cultures use the Jungian collective unconscious to store knowledge!

          • mad_lentil@lemmy.ca · 4 days ago

            Thoughts in a contiguous sequence??!!? What utter bloat! Why even have a past or future when a pure consciousness need only experience the horizon of an infinite present.

            • jsomae@lemmy.ml · 4 days ago

              Ⰰ⭕☣╛⊄ⴓ⬤⡥◻ⶠ≣ℙ⡥≾⚽⡳↍ⴖ≋ℒ⊴⎟⼑⋪‡⛘⩎??!!? ⓿⑍▆╟❵! ▧⟺⛴∎Ⳗ⭥♟↠⤢⮪ⱎ⧏ⲇ⃲⿁⌔⋓!!

  • Lovable Sidekick@lemmy.world · 5 days ago

    I’ll say one thing for the No CSS philosophy - at least it eliminates light-colored text on a light-colored background using the thinnest possible font, which is probably the stupidest stylistic trend since the web began.

    • GnuLinuxDude@lemmy.ml · 4 days ago

      I remember the wonderful feeling when Discord had a redesign in like 2017 or 2018 where they undid that awful gray-on-white design trend and made the text actually have contrast. These days the annoying trendy design thing is articles/blogs with extremely narrow width.

      no i do not want to read paragraphs
      that are this wide. this is making it
      way more annoying to read. please
      stop doing this.
      

      At least Firefox has Reader Mode.

      • bluesheep@lemm.ee · 4 days ago

        I’m annoyed by that too, and I think the reason is so they can cram more ads in. I had to turn off my adblock for a second and forgot to turn it back on while going to a news site, and I swear to God two-thirds of the page was ads. Turned it back on and those spaces were empty, leaving only a third of the page used. Still way better tho. I’m never turning it off again.

  • the_wiz@feddit.org · 4 days ago

    Just to mention it:

    gopher://sdf.org

    There is no better place for plain and real content.

  • Matriks404@lemmy.world · 5 days ago

    What we need is a subset of the modern web, without any bloat, especially JS frameworks.

    A lot of websites can be static HTML + CSS.

    • Vinstaal0@feddit.nl · 5 days ago

      “A lot of websites can be static HTML + CSS.”

      Yeah they can. I can understand you might want to use something like PHP so you don’t have to edit the footers and headers on every page if you ever change them, but still.

      I also like how some websites like Amazon.com refuse to add a payment platform that’s more than a credit card checkout. Especially because their EU sites do have payment platforms with more options to pay. So you have an overcomplicated site with a lot of bloat already, and some of your customers can’t even pay.

      • AbsentBird@lemm.ee · 5 days ago

        Then use a site generator like Hugo or Jekyll to stamp out new versions of your site with matching header/footer/etc.
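
        And the core of that stamping is small enough to sketch in a few lines of plain Node. This is a minimal, hypothetical sketch (the pages/ and dist/ layout and the header/footer strings are made up), not what Hugo or Jekyll literally do:

          // wrap every content file in a shared header and footer,
          // the essential trick behind static site generators
          const fs = require('node:fs');
          const path = require('node:path');

          const header = '<!doctype html><html><body><nav><a href="/">home</a></nav>';
          const footer = '<footer>© me</footer></body></html>';

          fs.mkdirSync('dist', { recursive: true });
          for (const file of fs.readdirSync('pages')) {
            const body = fs.readFileSync(path.join('pages', file), 'utf8');
            fs.writeFileSync(path.join('dist', file), header + body + footer);
          }

        Change the header once, re-run, and every page matches.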

  • ThePowerOfGeek@lemmy.world · 6 days ago

    JavaScript, AJAX, and modern web frameworks have pushed us away from displaying information in a pure and clean way. We need to go back to a better time!

    Looks at no-HTML websites

    Shit, we’ve gone back too far!

    • B-TR3E@feddit.org · 6 days ago

      CSS, on the other hand, is quite essential for separating layout from content. Which is a good thing, so I can’t really think of a reason for a “no-CSS” rule. Especially since you could still use inline styles, just in a much messier way.

      • howrar@lemmy.ca · 5 days ago

        I think the idea is that you keep the layout as simple as possible, such that you don’t need any code for it, CSS or otherwise.

        • B-TR3E@feddit.org · 5 days ago

          Oh, come on. You really want at least somewhat readable output. Things like image borders, consistently positioned images/diagrams, line breaks and page margins. Some whitespace and indentation, too. You just can’t read a couple of pages of unformatted raw text without massive eye fatigue. I’m all for dumping JS and excessive frameworks, and I’d prefer well-formed XHTML over any of that client-side scripted crap, but totally rejecting CSS is pointless zealotry.

          • zloubida@sh.itjust.works · 5 days ago

            In a perfect world, these would be decided not server-side, but client-side by choices made by the browser users.

            But our world is not perfect.

            • B-TR3E@feddit.org · 5 days ago

              Yes, I can read books. I even read one or two of the 1200 around me. Those with the fuckpics, and some of the funnier ones, like Hegel’s “Phänomenologie des Geistes”. I wouldn’t have if they had been laid out using browser defaults.

                • B-TR3E@feddit.org · 5 days ago

                  That’s not even convincing pedantry. Nobody would assume that a browser’s standard style is an RFC, IETF, or in any way official standard.

            • B-TR3E@feddit.org · 5 days ago

              I don’t think. You can’t prove I do! Leave me alone. You’re one of them! I knew it all the time.

      • jaybone@lemmy.zip · 5 days ago

        Separating layout from content is good. CSS is a really bad way to do it.

  • lmr0x61@lemmy.ml · 5 days ago

    I host my own website, and I decided to rewrite the JS portions in React in order to learn the framework. Boy, was it a learning experience: doing the same thing required 2-4 times the amount of code, and that’s just in the scripts, let alone all the bloat from the packages and the bundler.

    I know this is a bit more radical than cutting out frameworks, but working with the JS ecosystem was such a pain, largely because you need to piece together different pieces of software to make a stack work, and they may or may not go together well. And since your stack is likely unique, good luck getting help with your problems. It made me miss Rust (although most languages do): in Rust you have Cargo for everything, and it’s beautiful. Rust has its own difficulties, but they actually feel surmountable compared to the dependency hell of JS.
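
    To make the size difference concrete, here’s a minimal sketch (component and file names hypothetical): the same click counter in plain DOM JavaScript and in React 18 without JSX. The React version also pulls react and react-dom into node_modules and usually wants a bundler:

      // counter-vanilla.js: plain DOM, zero dependencies, no build step
      const button = document.createElement('button');
      let count = 0;
      button.textContent = 'Clicked 0 times';
      button.addEventListener('click', () => {
        count += 1;
        button.textContent = `Clicked ${count} times`;
      });
      document.body.append(button);

      // counter-react.js (a separate file): same behavior in React 18,
      // written with createElement instead of JSX so no compile step is implied
      import { createElement, useState } from 'react';
      import { createRoot } from 'react-dom/client';

      function Counter() {
        const [count, setCount] = useState(0);
        return createElement(
          'button',
          { onClick: () => setCount(count + 1) },
          `Clicked ${count} times`
        );
      }

      createRoot(document.getElementById('root')).render(createElement(Counter));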

    • x0x7@lemmy.world · 4 days ago

      The dependency hell of JS is caused by React. It’s an ironic turn, because Node gained popularity in part because it was one of the first to have a coupled package manager with a massive public contribution model, full of a billion packages that follow the Unix philosophy of “everything should do only one thing, and do it well”. Dependency hell would disappear if people stopped popularizing competing swiss army knives. It’s made worse by people trying to mash these swiss army knives together just to pad a portfolio.

      We’ve gotten to the point where you aren’t considered a real professional unless you start even the smallest projects with maximum technical debt.

      It should never be impressive that you used a tool. If the tool made programming it easier then it’s not a mental feat. If the tool made programming it harder, then people should think you are kind of slow for using a tool that made development harder. This is why brag culture over what tools are used makes no sense. Just use tools that make life easier. If it doesn’t make life easier, stop using it.

      • lmr0x61@lemmy.ml · 4 days ago

        That’s fair, actually: in vanilla JS my project had 2 packages in node_modules (total dependencies, not just my package.json!); now it has well over 100. Unreal.

      • mad_lentil@lemmy.ca · 4 days ago

        “We’ve gotten to the point where you aren’t considered a real professional unless you start even the smallest projects with maximum technical debt.”

        They’re just following the example laid out by the venture capital model, really.

    • RagingRobot@lemmy.world · 4 days ago

      React is probably overkill for most simple sites. You could still use JavaScript for some cool stuff without needing all the libraries and frameworks.
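
      For instance, a minimal sketch of dependency-free dynamic content (the /api/posts endpoint here is hypothetical):

        // fetch JSON and render it with bare DOM APIs: no framework, no bundler
        async function showPosts() {
          const res = await fetch('/api/posts'); // hypothetical endpoint
          const posts = await res.json();
          const list = document.createElement('ul');
          for (const post of posts) {
            const item = document.createElement('li');
            item.textContent = post.title;
            list.append(item);
          }
          document.body.append(list);
        }
        showPosts();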

  • moseschrute@lemmy.ml · 5 days ago

    Just out of curiosity, what percentage of people here are using Voyager as their Lemmy client?

    Spoiler:

    Voyager wouldn’t work without JavaScript… shhh, don’t tell anyone

      • moseschrute@lemmy.ml · 4 days ago

        There are so many people here who hate cloud-based services. And the same people also hate JavaScript. Like, you realize that if your app were just static JavaScript files, you could literally download the entire site to your computer and run it? Why is JavaScript the enemy?

        JavaScript isn’t the enemy. The enshittification of technology is the enemy.

  • Rose@slrpnk.net · 5 days ago

    “No HTML Club” is kinda going too far on the Web. If you go there, you might as well start a No HTTP Club and serve stuff over Gopher and FTP.

    But we definitely need an HTML 2.0 Club.

    • rottingleaf@lemmy.world · 5 days ago

      HTML 2.0 doesn’t have tables, and tables are not so bad; even org-mode has tables.

      Since HTML 4.01 was a thing when I first saw a website:

      Being able to have buttons is good. Buttons with pictures too.

      And, unlike some people, I liked the idea of framesets. A simple enough website could have an IRC-like chat frame on the left and the main navigable area on the right.

      And the unholy number of specific tags is the other side of the coin of not yet using JS and CSS for everything.

      I think an “RHTML” standard as a continuation, and maybe simplification, of HTML 4.01 could be useful (no JS, no CSS; do dynamic things in applets, and instead of Netscape plugins, a new kind of applet running in a specialized sandboxed VM with JIT). Other than that, there’s no need for any change at all. It’s perfect. It has all the necessary things for hypertext.

        • kazerniel@lemmy.world · 4 days ago

          I hated frames, but I do have a tiny bit of nostalgia for them, because I started web design in the early ’00s when they were all the rage for handmade blogs and portfolio sites :D

          And the iframes took up like a quarter of the screen (with minuscule faint text!) while the rest of the page was large brush swoops and other graphical elements 🥹

          And the tiny navigation buttons without any text that you had to figure out from the hovered URL.

          Ah it was all so fucking unusable, but pretty xD

  • AlteredEgo@lemmy.ml · 4 days ago

    That is just stupid. How about a slightly more complex Markdown?

    What I really want is a P2P archive of all the relevant news articles of the last decades, in Markdown, like Firefox’s Reader View. And some super-advanced LLM-powered text compression so you can easily store a copy of 20% of them on your PC to share P2P.

    Much of the information on the internet could vanish within months if we face some global economic crisis.

    • rottingleaf@lemmy.world · 4 days ago

      “And some super-advanced LLM-powered text compression so you can easily store a copy of 20% of them on your PC to share P2P.”

      Nothing can be that advanced, and zstd is good enough.

      The idea is cool. Pure p2p exchange would be a fallback, with something like BitTorrent’s trackers as the main center yielding nodes per space (supposing there’s more than one such archive you’d want to replicate) and per partition (if it’s too big then maybe that would make sense, but then some of what I write below should be reconsidered).

      The problem with torrents and similar systems is that people only store what’s interesting to them.

      If you have to store one humongous archive, be able to efficiently search it, and avoid losing pieces, then, I think, you need a partitioned, roughly equal distribution of it over the nodes.

      The space of keys (suppose it’s hashes of blocks of the whole) is partitioned by prefix, so that a node stores an equal number of blocks from every prefix. And of those, the values closest to the node’s identifier (a bit like in Kademlia) should be stored first, as sketched below. OK, I’m thinking the first sentence of this paragraph might even be unneeded.
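
      A minimal sketch of that closeness rule, assuming SHA-256 block hashes and 256-bit node identifiers (every name here is hypothetical):

        // Kademlia-style assignment: a node keeps the blocks whose hashes
        // are XOR-closest to its own identifier
        const { createHash, randomBytes } = require('node:crypto');

        const nodeId = randomBytes(32); // this node's identifier
        const hash = (block) => createHash('sha256').update(block).digest();

        // is key a strictly XOR-closer to target than key b?
        function closer(a, b, target) {
          for (let i = 0; i < target.length; i++) {
            const da = a[i] ^ target[i];
            const db = b[i] ^ target[i];
            if (da !== db) return da < db;
          }
          return false; // equal distance
        }

        // keep the n blocks closest to our identifier
        function pickBlocks(blocks, n) {
          return blocks
            .map((block) => ({ block, key: hash(block) }))
            .sort((x, y) => (closer(x.key, y.key, nodeId) ? -1 : 1))
            .slice(0, n)
            .map((entry) => entry.block);
        }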

      The data itself should probably be in some supercool format where you don’t need all of it to decompress just the small part you need, only the beginning with the dictionary plus some interval.
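
      That doesn’t necessarily require an exotic format: compressing each block independently against a shared preset dictionary already gives you that property. A minimal sketch with Node’s zlib (the dictionary here is a stand-in; a real archive would mine common strings from the corpus):

        // each block is deflated on its own against one shared dictionary,
        // so any single block can be inflated with only the dictionary in hand
        const zlib = require('node:zlib');

        const dictionary = Buffer.from('the and that with from http https'); // stand-in

        const deflateBlock = (block) => zlib.deflateSync(block, { dictionary });
        const inflateBlock = (compressed) => zlib.inflateSync(compressed, { dictionary });

        // round-trip one block without touching any other part of the archive
        const block = Buffer.from('an article chunk, stored and served on its own');
        console.log(inflateBlock(deflateBlock(block)).toString());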

      There should also be, as a separate piece of functionality of this system, search by keywords inside intervals, so that a search yields the intervals where a certain keyword is encountered, with nodes indexing the continuous intervals they can decompress and responding to search requests for those keywords. Ideally it should be possible to decompress a single block given just the dictionary. I suppose I should do my reading on compression algorithms and formats.

      The search function could probably also return Google-like context around each hit, depending on the space needed.

      We’d also need some way to reward contribution, that is, to pay a node owner for storing and serving blocks.

      • AlteredEgo@lemmy.ml · 4 hours ago

        I was thinking of something like the Gemini protocol, but a bit more elaborate. And yeah, I’m not sure how far text compression can be pushed, but I think LLMs could be useful and help reach a critical mass of being able to download and store tons of articles.

        BitTorrent v2 and other official extensions (updating torrents via DHT mutable items) allow some ways to do this, like hosting a YouTube channel and updating it with new videos, without any new network protocol. Well, theoretically, since this isn’t yet well supported in torrent clients or libraries.

        I’ve been thinking about how this would work for a while, but it’s kind of frying my brain haha. Like a “P2P version control database” that is truly open source. For articles and blog posts, but also for metadata for manhwa, movies, TV, anime, books, etc. Something anybody can download, use, share, edit, and fork without needing to set up some complex server. Something that can’t be taken down or sold, that someone else can just pick up if abandoned, and where you can merge different curated versions and additions easily.

        You’d basically want a “most popular items of the past X time” set that almost everybody downloads, and then the whole database split into more and more exotic or obscure items. So everybody has the popular stuff, but also has to host some exotic items so they don’t get lost. And it has to be easy to install and use.

        But the whole database has to be small, compact, and compressed enough that you can still easily host it on a normal HDD. In the current times, with economic and political dangers lurking, this would be a crucial bit of IT infrastructure.

        • rottingleaf@lemmy.world · 3 hours ago

          Gemini is just a web replacement protocol, with the basic things we remember from the olden-days Web but everything non-essential removed, so that a client is doable in a couple of days. I have my own Gemini viewer, LOL.

          To me this seems like a completely different application from torrents.

          I was dreaming of a thing similar to torrent trackers, for aggregating storage, computation, indexing, and search, with the responses of search, aggregation, and other services being structured and standardized, with cryptographic identities, and with some kind of market services for buying and selling storage and computation in a unified, pooled, but transparent way (scripted by buyer/seller), similar to MMORPG markets. The representation (what is a siloed service on the modern web) would live in a native client application, and those services would let you build any kind of huge client-server system on top of them, globally. That’s more of a global Facebook/Usenet/whatever, a killer of platforms: their infrastructure is internal while their representation is public on the Internet, and I want to make the infrastructure public on the Internet and the representation client-side, shared across many kinds of applications. Adding another layer to the OSI model, so to say, between the transport and application layers.

          For this application:

          I think you could have some kind of Kademlia-based p2p with voluntarily joined groups (including very huge ones), where nodes store replicas of partitions of the group’s common data based on their pseudo-random identifiers and/or some kind of ring built from those identifiers, to balance storage and resilience. If a group has a creator, then the replication factor can be propagated signed by them, and membership too.

          But if having a creator (even with cryptographically delegated decisions) and propagating changes through them is not OK, then maybe just using the hash of the whole data, or its BitTorrent-like info-tree hash, as a namespace that peers freely join will do.

          Then it may be better to partition not by parts of the whole piece, but by info tree? I guess making it exactly BitTorrent-like is not a good idea; rather some kind of block tree, like for a filesystem, with a separate piece of information for looking up which file is in which blocks, if we are doing a directory structure.

          Then, with peers joining freely, there’s no need for owners or replication factors; I guess a pseudorandom distribution of hashes will do, with each node storing first the partitions closest to its own hash.

          Now that I think about it, such a system would not be that different from BitTorrent, and could even be interoperable with it.

          There’s the issue of updates, yes, hence I started with groups having a hierarchy of creators who can make or accept updates. Having that, plus the ability to gradually copy one group’s data into another group, it should be possible to fork a certain state. But that line of thought makes reusing BitTorrent possible for only part of the system.

          The whole database is guaranteed to be more than a normal HDD (1 TB? I dunno). Absolutely guaranteed, no doubt at all. 1 TB (for example) would be someone’s collection of favorite stuff, and not an especially rich one.