• XLE@piefed.social
    14 days ago

    The language in the linked post is disinformation. AI does not “scheme,” but that’s the wording the post uses throughout. “Scheming” implies the competence of a person. This post is evidence of a dysfunctional piece of software failing to work properly, made by apparently increasingly incompetent developers.

    Upon looking a little closer, this is a fearmongering website devoted to overinflating claims of AI power while ignoring real-life present-day harms. They claim to be inspired by Sam Bankman-Fried’s Effective Altruism scam. They show pictures of beautiful beaches but fail to mention AI’s environmental harms. Their paranoid demands, if enacted, would calcify Big Tech’s monopoly on AI and help nobody on the planet affected by its abuses.

  • [deleted]@piefed.world
    14 days ago

    “Researchers find more defective chatbots that don’t follow instructions because glorified text completion doesn’t actually know or understand things.”

    It isn’t evading or ignoring anything. It is a fucking sentence autocomplete on steroids.

  • pixxelkick@lemmy.world
    14 days ago

    They don’t, lol.

    Pretty much always, this is just down to the fact that cheaper chatbots, especially free ones, have very limited context windows.

    Which means the initial restrictions you set, like “don’t do this, don’t touch that,” get dropped; the LLM no longer has them loaded. But it does still have, in its past history, the very clear and urgent directives about the task it’s trying to do and how important it is, so it’ll autocomplete whatever it’s “gotta do” to accomplish the task. And then… fucks something up.

    When you react to its fuck-up, the context gets reloaded.

    So now the LLM has in its history just this:

    1. It doing a thing against the rules
    2. The user yelling at it
    3. The original restrictions now getting loaded back in on top of that

    So now the LLM is going to autocomplete its generated text on top of that history, being very apologetic and going on about how it’ll never happen again.

    That’s all there is to it.
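
The truncation mechanism described above can be sketched as a sliding window over the message history. Everything here is an illustrative assumption, not any real provider’s behavior: the token budget, the word-count “tokenizer,” and the naive drop-oldest-first policy.

```python
# Minimal sketch of a fixed-size context window that drops the oldest
# messages first. Budget, tokenizer, and trimming policy are all
# illustrative assumptions, not any specific chatbot's implementation.

def count_tokens(message):
    # Crude stand-in for a real tokenizer: one "token" per word.
    return len(message["content"].split())

def trim_context(history, budget):
    """Keep only the most recent messages that fit in the budget."""
    kept, used = [], 0
    for message in reversed(history):  # walk newest to oldest
        cost = count_tokens(message)
        if used + cost > budget:
            break  # everything older than this falls out of the window
        kept.append(message)
        used += cost
    return list(reversed(kept))

history = [
    {"role": "system", "content": "Do not touch the production database"},
    {"role": "user", "content": "This task is very important and urgent: "
                                + "migrate the schema now " * 10},
    {"role": "assistant", "content": "Working on the migration " * 10},
]

window = trim_context(history, budget=90)
roles = [m["role"] for m in window]
# The system restriction no longer fits the budget; only the urgent
# task chatter survives, which is exactly the failure mode described.
```

The point of the sketch: nothing “schemes.” The restriction simply stops existing for the model once the busywork pushes it past the window edge.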

    • MalReynolds@slrpnk.net
      14 days ago

      Cheap fuckers cheaping out, shocker (context is (V)RAM). AI speedrunning enshittification, who’d have thunk.

      • pixxelkick@lemmy.world
        14 days ago

        Uh… no, it’s just the free models being free; they’re intentionally lower-cost to provide free options for people who don’t wanna pay subscription fees.

        (context is (V)RAM)

        Eh, sort of. It’s more about operating costs: the larger the context size, the more expensive the model is to run, literally in terms of power consumption.

        Keep in mind we are on the scale of fractions of cents here, but multiply that by millions of users and it adds up fast.
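
That “fractions of cents, times millions of users” point can be made concrete with a back-of-envelope calculation. Every number below is a made-up illustrative assumption, not real pricing from any provider:

```python
# Back-of-envelope: how per-request context cost scales across a user
# base. All figures are hypothetical assumptions for illustration only.

COST_PER_1K_TOKENS = 0.0002   # assumed serving cost in dollars

def daily_cost(context_tokens, requests_per_user, users):
    # Cost of filling one context window, times total daily requests.
    per_request = (context_tokens / 1000) * COST_PER_1K_TOKENS
    return per_request * requests_per_user * users

small = daily_cost(context_tokens=4_000, requests_per_user=20, users=1_000_000)
large = daily_cost(context_tokens=128_000, requests_per_user=20, users=1_000_000)
# Under these assumptions: $16,000/day vs $512,000/day. A 32x bigger
# window means a 32x bigger serving bill, which is why free tiers
# keep context small.
```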

        But the end result is that the agent will fuck stuff up, and will even quickly /forget/ that it fucked up if you don’t catch it ASAP.

        A lot of them have a context window that can be wiped out within, like, 2 minutes of steady busywork…

        • Log in | Sign up@lemmy.world
          13 days ago

          I love how your response to the catastrophic results of stupidly trusting AI is “pay more money to AI companies.”

          Sane person’s response: don’t trust LLMs.

          • pixxelkick@lemmy.world
            13 days ago

            What are you talking about?

            No? I never said that.

            I just explained /why/ it happened. Nowhere in my post did I say, or imply, that someone should pay for more expensive models. What are you smoking?

            You just have to be aware that they have a very short memory when you’re using a cheap model, and assume anything you wrote a minute ago has already left its memory. That’s why they produce pretty dumb output if you try to depend on it… so… don’t depend on it.

            • Log in | Sign up@lemmy.world
              13 days ago

              Everyone else who has any sense: LLMs are shit and you shouldn’t trust them with executive power.

              You: just the cheap ones.

              Me: no, all of them. What kind of lunatic entrusts control of anything important to a fundamentally stochastic process?

              • pixxelkick@lemmy.world
                13 days ago

                You: just the cheap ones

                I never said that. I just said that the cheap ones are especially shitty.

                People on this site really lack reading comprehension, it seems.

                • Log in | Sign up@lemmy.world
                  6 days ago

                  no its just the free models…

                  You just have to be aware… when using a cheap model

                  You: just the cheap ones

                  I never said that.

                  Ohhhhhhhhh, OK, yes, of course you never said or implied that. Not your repeated message at all. And yet you can’t keep away from addressing your criticism towards free or cheap LLMs! It’s like your subtext, or your underlying belief, is that if you just pay Big Tech enough money and they build a big enough set of server farms, it’ll be OK. No, it will not be OK, and the enshittification has begun from an already shitty starting point.

                  All LLMs are shit; the cheap and free ones are just easier to catch generating shit, if you ask them about things you know about. But you have to accept that they’re ALL shit and STOP making get-out clauses for the expensive ones by firing your criticisms exclusively at the cheap or free ones.

                  Giving ANY LLM executive power over your data is A BIG MISTAKE, because you’re putting your data in the control of something that operates, at its heart, as a random number generator. They’re trained to sound right. People trust them because they sound right. This is a fundamental error.