• cybervseas@lemmy.world · 14 days ago

I think it could have been an interesting use case to chat with a Steam bot to get game recommendations.

    • Quetzalcutlass@lemmy.world · edited · 14 days ago

      Their current recommendation engine is already a marvel and the only one I’ve ever come across that actually directs me to niche stuff I might be interested in.

    • sp3ctr4l@lemmy.dbzer0.com · edited · 13 days ago

      This is not meant to be a chatbot.

      It is meant to evaluate gaming sessions of CS2 (and potentially any VAC-enabled game, maybe).

      It's an experimental prototype for improving VAC's server-side, backend analysis capabilities, to better detect cheaters and hackers.

      You don't need kernel-level access to everyone's PCs.

      You can run analytics on what the server records as happening in the game session, to detect odd patterns and things that should be impossible.
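      To make that concrete, here's a toy sketch of that idea: scanning recorded per-shot server data for things that should be impossible. The event fields, tick rate, and the "max human flick" threshold are all made up for illustration; nothing about VAC's actual internals is public.

```python
# Hypothetical sketch: flagging "impossible" patterns in recorded server events.
# ShotEvent, the field names, and the thresholds are invented for illustration.
from dataclasses import dataclass

@dataclass
class ShotEvent:
    tick: int              # server tick when the shot landed
    view_delta_deg: float  # how far the aim snapped since the previous tick
    headshot: bool

TICK_RATE = 64              # assumed ticks per second
MAX_HUMAN_FLICK_DEG = 40.0  # assumed per-tick aim snap a human could plausibly make

def suspicious_shots(events: list[ShotEvent]) -> list[ShotEvent]:
    """Return shots whose aim snapped faster than the assumed human limit."""
    return [e for e in events if e.view_delta_deg > MAX_HUMAN_FLICK_DEG]

def headshot_rate(events: list[ShotEvent]) -> float:
    """Fraction of landed shots that were headshots."""
    if not events:
        return 0.0
    return sum(e.headshot for e in events) / len(events)

session = [
    ShotEvent(tick=100, view_delta_deg=3.2, headshot=False),
    ShotEvent(tick=164, view_delta_deg=178.0, headshot=True),  # near-instant 180° snap
    ShotEvent(tick=230, view_delta_deg=5.1, headshot=True),
]

flagged = suspicious_shots(session)
print(f"{len(flagged)} suspicious shot(s), headshot rate {headshot_rate(session):.0%}")
```

      Real systems would obviously use far richer signals than one angle threshold, but the principle is the same: the server already saw everything, so you don't need anything on the client.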

      LLMs are… well, the entire thing they do is take in massive amounts of data, and then evaluate that data.

      The part of an LLM that generates a response, in text form, to that data, is a whole other thing.

      They can also output… code, or spreadsheets, or images, or 3D models, or… any other kind of data.

      Like say, a printout of suspicious activity in a game session, with statistically derived confidence intervals and timestamps and analysis.

      Then you have another, differently tuned LLM ingest the data the first LLM produces, and turn it into something else.

      You see the ModelEvaluation and then MetaModelEvaluation?

      That looks like what they’re doing to me.

      Detailed Server Logs -> ModelEvaluation -> MetaModelEvaluation.

      If you’ve ever run a dedicated multiplayer server and had to deal with hackers… you’re gonna be looking through server logs to sniff out nonsense.

      Server-side cheat detection, very oversimplified, is having automatic systems do that.
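      The two-stage pipeline described above could be sketched like this. The stage names come from the leak being discussed; the scoring logic, field names, and thresholds are all invented stand-ins for whatever models actually sit in those slots.

```python
# Hypothetical sketch: server logs -> per-session "ModelEvaluation" report ->
# account-level "MetaModelEvaluation" over many reports. All details invented.
import json

def model_evaluation(session_log: list[dict]) -> dict:
    """First stage: turn raw per-session events into a structured report
    with a crude suspicion score (a stand-in for a model's output)."""
    snaps = [e for e in session_log if e.get("view_delta_deg", 0) > 40.0]
    score = min(1.0, len(snaps) / max(len(session_log), 1) * 5)
    return {
        "session_id": session_log[0]["session_id"] if session_log else None,
        "suspicious_events": snaps,
        "confidence": round(score, 2),
    }

def meta_model_evaluation(reports: list[dict]) -> dict:
    """Second stage: aggregate many per-session reports into an
    account-level verdict (a stand-in for a differently tuned model)."""
    scores = [r["confidence"] for r in reports]
    avg = sum(scores) / len(scores) if scores else 0.0
    return {"sessions": len(reports), "avg_confidence": round(avg, 2),
            "verdict": "review" if avg >= 0.5 else "clean"}

logs = [
    [{"session_id": "a1", "view_delta_deg": 178.0},   # one impossible snap
     {"session_id": "a1", "view_delta_deg": 2.0}],
    [{"session_id": "a2", "view_delta_deg": 3.0}],    # nothing unusual
]
reports = [model_evaluation(log) for log in logs]
print(json.dumps(meta_model_evaluation(reports), indent=2))
```

      The point of the second pass is that one weird session proves nothing, but a pattern across sessions is much harder to explain away.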

        • sp3ctr4l@lemmy.dbzer0.com · 13 days ago

          I can still hardly believe that the tech industry at large just decided to broadly roll out LLM integration into essentially every element of their businesses, with basically no idea what LLMs actually do.

          Like 2 years ago now, I was figuratively pulling my hair out, reading the discussion panel schedule for Microsoft led conferences on LLMs and cybersecurity.

          Literally every topic was a different way that smashing an LLM into a complex business system increases potential failure points and broadens attack surfaces… because networked LLMs literally are security vulnerabilities.

          Not a single topic about how to use LLMs defensively, how to use them to turbocharge malware signature recognition, nothing like that.

          All just a bunch of ‘make sure you don’t do this!’ warnings, and then everyone did them anyway.