• nyan@sh.itjust.works · 3 months ago

    It matters, because it’s a tool. That means it can be used correctly or incorrectly, and most people who don’t understand a given tool end up using it incorrectly, damaging themselves, the tool, and/or innocent bystanders in the process.

    True AI (“artificial general intelligence”, if you prefer) would qualify as a person in its own right, rather than a tool, and would therefore be able to take responsibility for its own actions. LLMs can’t do that, so the responsibility for anything done with this type of model lies with either the person using it (or requiring its use) or whoever advertised the LLM as fit for some purpose. And that’s VERY important, from a legal, cultural, and societal point of view.