• 0 Posts
  • 13 Comments
Joined 2 years ago
Cake day: June 11th, 2023

  • Yes, I understand what you’re saying, it’s not a complicated position.
    Your position is that national reputation matters more than anything else. And most pointedly, the national reputation of your allies matters more than any other argument.

    What I’m saying is that the actions the US, or any other nation, took before the people currently running things were even born have no bearing on current events. Nations aren’t people, and they don’t possess a national character that you can use to try to predict their behavior or judge them.

    Would the world be justified in concluding that it’s only a matter of time before Germany does some more genocide? Before Japan unleashes atrocities across Asia?

    If you’re getting down to it, the US can’t control other nations, beyond stick and carrot means. And the US has the same right to try to keep Iran from getting nukes as Iran does in trying to get them. Because again, nations aren’t people. They don’t have rights, they have capabilities.

    And all of that’s irrelevant! Because the question is, is Israel justified in attacking Iran? The perception of hypocrisy in US foreign policy isn’t relevant to that question.


  • No, what I don’t understand is what relevance that has to this situation. The US using nukes on Japan 80 years ago doesn’t make Iran making nukes justified. It doesn’t validate Iran not having nukes. It neither strengthens nor weakens Israeli claims of an Iranian weapons program, and it doesn’t make a preemptive strike to purportedly disable them just or unjust.

    It seems like you’re arguing that the US nuked Japan and therefore Iran, a signatory to the nuclear nonproliferation treaty, is allowed to have nukes. Israel is falsely characterizing their civilian energy program, and we know this because of their backing by the US.
    It’s just a non-sequitur, particularly when there are relevant reasons why US involvement complicates matters.



  • The US’s actions in World War Two are an odd thing to bring up in this context. It was a radically different set of circumstances, 80 years ago, and none of the people involved are alive anymore.
    It’s entirely irrelevant.

    May as well point out that the US was the driver for the creation of those watchdog groups and is a leading force in nuclear disarmament. It’s just as relevant to whether Iran has a nuclear weapons program or to Israel’s justification for attacking.

    Iranian opposition to US strategic interests in the region, which gives the US a strong motivation to let anything that weakens Iran happen, is a perfectly good thing to mention.


  • Those are entirely different. Peano developed a system for talking about arithmetic in a formalized way. This allowed people to talk about arithmetic in new ways, but it didn’t show that previous formulations of arithmetic were wrong. Gödel then built on that to show the limits of arithmetic, which still didn’t invalidate that which came before.
    The development of complex numbers as an extension of the real numbers didn’t make work with the real numbers invalid.

    When a new scientific model is developed, it supersedes the old model. The old model might still have use, but it’s now known to not actually fit reality. Relativity showed that Newton’s model of the cosmos was wrong: it didn’t extend it or generalize it, it showed that it was inadequately describing reality. Close for human-scale problems but ultimately wrong.
    And we already know relativity is wrong because it doesn’t match experimental results in quantum mechanics.

    Science is our understanding of reality. Reality doesn’t change, but our understanding does.
    Math is fundamentally different from science: if you know something is true, then it’s always true given the assumptions.


  • Not quite. Science is empirical, which means it’s based on experiments and we can observe patterns and try to make sense of them. We can learn that a pattern or our understanding of it is wrong.

    Math is deductive, which means that we have a starting point, the axioms, and we expand out from there using rules. It’s not experimental, and conclusions don’t change.
    1+1 is always 2. What happens to math is that we uncover new ways of thinking about things that change the rules or underlying assumptions. 1+1 is 10 in base 2. Now we have a new, deeper truth about the relationship between bases and what “two” means.

    Science is much more approximate. The geocentric model fit, and then new data made it not fit and the model changed. Same for heliocentrism, Galileo’s models, Kepler’s, and Newton’s. They weren’t wrong, they were just discovered to not fit observed reality as well as something else.

    A scientific discovery can shift our understanding of the world radically and call other models into question.
    A mathematical discovery doesn’t do that. It might make something clearer, easier to work with, or provide a technique that can be surprisingly applicable elsewhere.
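
    The point about bases can be sketched in a few lines (Python here, purely for illustration):

    ```python
    # The value "two" is the same in every base; only the written numeral changes.
    two = 1 + 1                       # the quantity itself
    assert two == 2                   # base-10 numeral "2"
    assert format(two, "b") == "10"   # base-2 numeral "10"
    assert int("10", 2) == two        # reading "10" as base 2 recovers the value
    ```

    The conclusion 1+1=2 never changed; discovering positional notation in other bases just gave a new way to write it down.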


  • We discovered one of the postulates was really interesting to fuck with.

    It’s better to say that we’ve discovered more math, some of which changes how we understand the old.

    Since Euclid, we’ve made discoveries in how geometry works and its underpinnings that can and have been used to provide a foundation for his work, or to demonstrate some of the same things more succinctly. For example, Euclid had some assumptions that he didn’t document.

    Since math isn’t empirical, it’s rarely wrong if actually proven. It can be looked at differently though, and have assumptions changed to learn new things, or we can figure out that there are assumptions that weren’t obvious.


  • Fundamentally, I agree with you.

    The page being referenced

    Because the phrase “Wikimedians discussed ways that AI…” is ambiguous, I tracked down the page being referenced. It could mean they gathered with the intent to discuss that topic, or they discussed it as a result of considering the problem.

    The page gives me the impression that it’s not quite “we’re gonna use AI, figure it out”, but more that some people put together a presentation on how they felt AI could be used to address a broad problem, and then they workshopped more focused ways to use it towards that broad target.

    It would have been better if they had started with an actual concrete problem, brainstormed solutions, and then gone with one that fit, but they were at least starting with a problem domain that they thought it was applicable to.

    Personally, the problems I’ve run into on Wikipedia are largely low traffic topics where the content is too much like someone copied a textbook into the page, or just awkward grammar and confusing sentences.
    This article quickly makes it clear that someone didn’t write it in an encyclopedia style from scratch.


  • A page detailing the AI-generated summaries project, called “Simple Article Summaries,” explains that it was proposed after a discussion at Wikimedia’s 2024 conference, Wikimania, where “Wikimedians discussed ways that AI/machine-generated remixing of the already created content can be used to make Wikipedia more accessible and easier to learn from.” Editors who participated in the discussion thought that these summaries could improve the learning experience on Wikipedia, where some article summaries can be quite dense and filled with technical jargon, but that AI features needed to be clearly labeled as such and that users needed an easy way to flag issues with “machine-generated/remixed content once it was published or generated automatically.”

    The intent was to make more uniform summaries, since some of them can still be inscrutable.
    Relying on a tool notorious for making significant errors isn’t the right way to do it, but it’s a real issue being examined.

    In thermochemistry, an exothermic reaction is a “reaction for which the overall standard enthalpy change ΔH⚬ is negative.”[1][2] Exothermic reactions usually release heat. The term is often confused with exergonic reaction, which IUPAC defines as “… a reaction for which the overall standard Gibbs energy change ΔG⚬ is negative.”[2] A strongly exothermic reaction will usually also be exergonic because ΔH⚬ makes a major contribution to ΔG⚬. Most of the spectacular chemical reactions that are demonstrated in classrooms are exothermic and exergonic. The opposite is an endothermic reaction, which usually takes up heat and is driven by an entropy increase in the system.

    This is a perfectly accurate summary, but it’s not entirely clear and has room for improvement.

    I’m guessing they were adding new summaries so that they could clearly label them and not remove the existing ones, not out of a desire to add even more summaries.



  • Eh, there’s an intrinsic amount of information about the system that can’t be moved into a configuration file, if the platform even supports them.

    If your code is tuned to make movement calculations with a deadline of less than 50 microseconds and you have code systems for managing magnetic thrust vectoring and the timing of a rotating detonation engine, you don’t need to see the specific technical details to work out ballpark speed and movement characteristics.
    Code is often intrinsically illustrative of the hardware it interacts with.

    Sometimes the fact that you’re doing something is enough information for someone to act on.

    It’s why artefacts produced from classified processes are assumed to be classified until they can be cleared and declassified.
    You can move the overt details into a config and redact the parts of the code that use that secret information, but that still reveals that there is secret code because the other parts of the system need to interact with it, or it’s just obvious by omission.
    If payload control is considered open, 9 out of 10 missiles have open guidance control, and then one has something blacked out and no references to a guidance system, you can fairly easily deduce that that missile has a guidance system that’s interesting, with capabilities likely greater than what you know about.

    Eschewing security through obscurity means you shouldn’t rely on your enemy’s ignorance, and you should work under the assumption of hostile knowledge. It doesn’t mean you need to seek to eliminate obscurity altogether.
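
    The timing point above can be made concrete with a toy sketch. The 50-microsecond deadline is just the hypothetical figure from the comment, not a real system value:

    ```python
    # Hypothetical sketch: a single timing constant in control code already
    # leaks system characteristics, even with every other detail in a config.
    DEADLINE_US = 50  # assumed control-loop deadline from the example above

    # A 50 us deadline implies the control loop runs at least this often:
    loop_rate_hz = 1_000_000 / DEADLINE_US
    assert loop_rate_hz == 20_000  # a ~20 kHz control loop

    # An observer who sees only this constant can bound how responsive the
    # actuators must be, and from that estimate speed and maneuverability.
    ```

    Moving the constant into a config file doesn’t help much: the code that schedules around it still reveals that such a deadline exists.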