

YMMV, I guess? I think it runs incredibly well, especially with Lumen enabled, given the sheer amount of stuff in-game. FPS is way higher than comparable looking games without thousands of player built objects, and the lighting is beautiful.


To illustrate what I mean more clearly, look at the top comments/replies for the NASA Artemis posts, as an example.
…It’s basically all conspiracy theorists and government skeptics.
Twitter is surfacing the Artemis posts to them because it’s what they want to see, and what’s most engaging for them.
In the EFF’s case, I’m not just talking about Musk’s influence. The algorithm will only show the EFF to users who would be highly engaged by it. E.g., angry skeptics who wouldn’t be swayed by the EFF anyway, or fans who already agree with the EFF. It’s literally not going to show the EFF to people who need to see it, as Twitter’s metrics would show it as unengaging.
This is the “false image” I keep trying to dispel. Twitter is less and less the “even spread” of exposure people think it is (and it sort of used to be), and more and more a hyper-focused bubble of what you want to hear, and only what you want to hear. All the changes Musk is making are amplifying that. Maybe that’s fine for some orgs, but there’s no point in the EFF staying in that kind of environment, regardless of ethics.


I feel like the EFF’s messaging is just not going to get through to anyone still on Twitter.
Remember, it’s not a fair forum; it’s an algorithm. And it’s not going to show the EFF to users who need to see it.


Hardly. Power costs are trivial to them at the moment, and a server hardware bottleneck would just consolidate power to the big few that can afford it (which is what they want).


The source for the original report (The Free Press) seems questionable, per Wikipedia’s own discussion on it:
https://en.wikipedia.org/wiki/Wikipedia:Reliable_sources/Noticeboard/Archive_397#The_Free_Press
Also, please try to at least link the original article in the description: https://www.thefp.com/p/why-the-vatican-and-the-white-house


More reporting on this. Whole article is short and worth a read, but:
https://www.axios.com/2026/04/08/lebanon-attacks-israel-iran-ceasfire
The U.S. official said the White House is not currently concerned that the situation in Lebanon would cause the ceasefire with Iran to collapse.
Hezbollah said it had a right to respond to Israel’s attack.


Uh oh.
Also, I found some interesting trackers:
https://www.hormuztracker.com/
https://www.shiptraffic.net/2001/04/hormuz-strait-ship-traffic.html
No ships going through yet. And one had a very interesting nugget of info on insurance:
Insurance withdrawal makes transit commercially unviable independent of physical conditions. Even if the security situation improves tomorrow, insurers take weeks to reinstate coverage. Ships cannot sail without P&I cover — ports won’t accept them, banks won’t finance cargo, charterers won’t book them. Three weeks in, no insurer has shown willingness to reinstate. This is why the disruption will persist well beyond any resolution.


To add to what others said:
LPDDR, the same memory you find in laptops and smartphones, is used in some inference hardware.
Also, the servers need a whole lot of regular CPU DIMMs, since they’re still mostly EPYC/Xeon servers with 8 GPUs in each. And why are they “wasting” so much on CPU RAM that isn’t really needed, you ask? Same reason as a lot of AI: it’s immediately accessible, already targeted by devs, and AI dev is way more conservative and wasteful than you’d think.
Same for SSDs: regular old servers (including AI servers) need them too. In a perfect world they’d use centralized storage for images/weights with near-“diskless” inference/training servers. Some AI servers do this, but most don’t.
Basically, the waste is tremendous, for the same reason they use cheap gas generators on-site: it’s faster to market.


There are some very efficient games using UE5, like Satisfactory.
On the contrary, I’m afraid of custom engine games. Even if they ultimately turn out okay, the dev hell required to get them there often sinks the game. See: ME: Andromeda, Cyberpunk 2077. And Distant Worlds 2 (even though it wasn’t technically fully custom).
IMO the best path is choosing the game engine for your niche. As an example, CryEngine was practically made for KCD2’s European forests and medieval towns, and Larian’s Divinity engine is literally made for a D&D-type game like BG3.


No.
But it certainly has some strong indirect effects on my buying decisions.


Came off as abrasive, but your point stands.
Political support for the alt-right is booming across Europe. The last thing you European folks need is to turn your noses up at the American tire fire. Deal with your own, before it’s too late.


What would happen if a tanker was destroyed and spilled out there?


Imagine if you showed this to someone in ~2009.


Even not-fully-reproducible open-weights models are extremely important because they’re poison to OpenAI, and they know it. It makes what they’re trying to commodify and control effectively free and utilitarian.
But there are fully open models, too, with public training data.


It’s anticompetitiveness.
They want to squash open models, and anyone too small to comply with this.
I say this in every thread, but the real AI “battle” is open-weights ML vs OpenAI style tech bro AI. And OpenAI wants precisely no one to realize that.


Not me.
I wanna be there to report every attribution-cropped post I can find, at least in subs where it’s applicable. And repost it without the crop, then tell everyone to watch out for your posts.


Ughhh, I could go on forever, but to keep it short:
Tech bro enshittification: https://old.reddit.com/r/LocalLLaMA/comments/1p0u8hd/ollamas_enshitification_has_begun_opensource_is/
Hiding attribution to the actual open source project it’s based on: https://old.reddit.com/r/LocalLLaMA/comments/1jgh0kd/opinion_ollama_is_overhyped_and_its_unethical/
A huge support drain on llama.cpp, without a single cent, nor a notable contribution, given back.
Constant bugs and broken models from “quick and dirty” model support updates, just for hype.
Breaking standard GGUFs.
Deliberately misnaming models (like the Deepseek Qwen distills and “Deepseek”) for hype.
Horrible defaults (like ancient default models, 4096 context, really bad/lazy quantizations).
A bunch of spam, drama, and abuse on Linkedin, Twitter, Reddit and such.
Basically, the devs are Tech Bros. They’re scammer-adjacent. I’ve been in local inference for years, and wouldn’t touch ollama if you paid me to. I’d trust Gemini API over them any day.
I’d recommend base llama.cpp, ik_llama.cpp, or kobold.cpp, but if you must use a “turnkey” and popular UI, LM Studio is way better.
But the problem is, if you want a performant local LLM, nothing about local inference is really turnkey. It’s just too hardware sensitive, and moves too fast.


I’d be fine if these studios just…vanished
Outside mobile, it would honestly be a boon to gaming.
Think how much attention and funding they suck up from smaller studios/publishers making great games. Folks have no idea what they’re missing.


Also, for anyone interested: desktop inference and quantization is my autistic interest. Ask me anything.
I don’t like Gemma 4 much so far, but if you want to try it anyway:
On Nvidia with no CPU offloading, watch this PR and run it with TabbyAPI: https://github.com/turboderp-org/exllamav3/pull/185
With CPU offloading, watch this PR and the mainline llama.cpp issues they link. Once Gemma 4 inference isn’t busted, run it in IK or mainline llama.cpp: https://github.com/ikawrakow/ik_llama.cpp/issues/1572
If you’re on an AMD APU, like a Mini PC server, look at: https://github.com/lemonade-sdk/lemonade
On an AMD or Intel GPU, either use llama.cpp or kobold.cpp with the vulkan backend.
Avoid ollama like it’s the plague.
Learn chat templating and play with it in mikupad before you use an “easy” frontend, so you understand what it’s doing internally (and know when/how it goes wrong): https://github.com/lmg-anon/mikupad
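To show what I mean by chat templating, here’s a minimal Python sketch using the `<start_of_turn>`/`<end_of_turn>` markers earlier Gemma releases used. I’m assuming Gemma 4 keeps a similar format; always check the actual `chat_template` shipped in the model’s tokenizer config:

```python
def format_gemma_chat(messages):
    """Render an OpenAI-style message list as a Gemma-style prompt string.

    Gemma's template wraps each turn in <start_of_turn>/<end_of_turn>
    and uses the role names "user" and "model" (not "assistant").
    """
    parts = []
    for msg in messages:
        role = "model" if msg["role"] == "assistant" else "user"
        parts.append(f"<start_of_turn>{role}\n{msg['content']}<end_of_turn>\n")
    # Leave an open "model" turn so the LLM generates the reply.
    parts.append("<start_of_turn>model\n")
    return "".join(parts)

prompt = format_gemma_chat([
    {"role": "user", "content": "Why is the sky blue?"},
])
print(prompt)
```

When a frontend gets this wrong (wrong role names, missing newlines, doubled special tokens), output quality quietly tanks, which is exactly the kind of failure mikupad makes visible.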
But TBH I’d point most people to Qwen 3.5/3.6 or Step 3.5 instead. They seem big, but being sparse MoEs, they can run quite quickly on single-GPU desktops: https://huggingface.co/models?other=ik_llama.cpp&sort=modified
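The sparse-MoE speed claim is mostly just memory arithmetic. A rough back-of-envelope sketch (my own illustration, not benchmarks; it assumes decode is memory-bandwidth bound and ignores KV cache and overhead):

```python
def est_decode_tps(active_params_billions, bits_per_weight, mem_bw_gb_s):
    """Upper bound on decode tokens/sec: each generated token must stream
    all *active* weights through memory once, so throughput is capped at
    bandwidth / active-weight bytes. Real speeds land below this."""
    bytes_per_token = active_params_billions * 1e9 * bits_per_weight / 8
    return mem_bw_gb_s * 1e9 / bytes_per_token

# Dense 70B at 4-bit on a ~1 TB/s GPU: ~29 t/s ceiling.
# A sparse MoE with ~3B active params at 4-bit on the same card: ~667 t/s,
# and still ~67 t/s even from ~100 GB/s desktop DDR5.
```

That’s why a “big” MoE with a small active-parameter count can feel faster on a single-GPU desktop than a much smaller dense model.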


Let’s set all ethics and politics and humanitarianism aside.
From a selfish, strategic, “NonCredibleDefense” kind of perspective, Ukraine is an incredible military power. Militaries around the world should be on their knees begging for their experience with, for instance, shooting down drones, if not for their units outright.
And it’s absolutely mind-boggling to me that Western powers don’t see that. Think how Ukraine could help them militarily all over the world… If I were Xi, I wouldn’t dare invade Taiwan with a big chunk of Ukrainians training them.