zlacker

[return to "xAI joins SpaceX"]
1. gok+h4[view] [source] 2026-02-02 22:06:22
>>g-mork+(OP)
> it is possible to put 500 to 1000 TW/year of AI satellites into deep space, meaningfully ascend the Kardashev scale and harness a non-trivial percentage of the Sun’s power

We currently make around 1 TW of photovoltaic cells per year, globally. The proposal here is to launch that much into space roughly every 9 hours, complete with attached computers, continuously, from the Moon.

edit: Also, this would capture a very trivial percentage of the Sun's power. A few trillionths per year.
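A back-of-envelope sketch of those two figures, assuming the ~1 TW/yr global PV production rate above and a total solar output of ~3.85e26 W (that second number is my assumption, not from the thread):

```python
# Back-of-envelope check: launch cadence and fraction of solar output.
# Assumptions: global PV production ~1 TW/yr (per the comment above),
# total solar luminosity ~3.846e26 W (standard value, not from the thread).

HOURS_PER_YEAR = 8766.0                  # ~365.25 days
PV_PRODUCTION_TW_PER_YEAR = 1.0
SOLAR_OUTPUT_TW = 3.846e26 / 1e12        # ~3.85e14 TW

for target_tw in (500, 1000):
    # How often a full year of today's global PV output must be launched
    cadence_hours = HOURS_PER_YEAR / (target_tw / PV_PRODUCTION_TW_PER_YEAR)
    # Fraction of the Sun's total output that target would capture
    solar_fraction = target_tw / SOLAR_OUTPUT_TW
    print(f"{target_tw} TW/yr: one year of PV every {cadence_hours:.1f} h, "
          f"~{solar_fraction:.1e} of solar output")
```

At 1000 TW/yr this works out to a launch of a full year's PV output about every 8.8 hours, capturing on the order of a few trillionths of the Sun's output, consistent with the figures above.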

◧◩
2. rainsf+RA[view] [source] 2026-02-03 00:24:23
>>gok+h4
We also shouldn't overlook the fact that the proposal entirely glosses over the alternative benefits we might realize if humanity achieved the incredible engineering and technical capacity necessary to make this version of space AI happen.

Think about it. Elon conjures up a vision of the future where we've managed to increase our solar cell manufacturing capacity by two whole orders of magnitude and have the space launch capability for all of it along with tons and tons of other stuff and the best he comes up with is...GPUs in orbit?

This is essentially the superhero gadget technology problem, where comic books and movies gloss over the civilization-changing implications of some technology the hero invents to punch bad guys harder. Don't get me wrong, the idea of orbiting data centers is kind of cool if we can pull it off. But being able to pull it off implies an ability to do a lot more interesting things. The problem is that this is both wildly overambitious and somehow incredibly myopic at the same time.

◧◩◪
3. keepam+5W[view] [source] 2026-02-03 02:44:39
>>rainsf+RA
All right, so how is it that all you geniuses out here are totally right about this, while all the dullards at SpaceX and xAI, who have accomplished nothing compared to you lot, are somehow wrong about what they do every day?

I know being right without responsibility feels amazing, but results are a brutal filter.

◧◩◪◨
4. cagenu+LW[view] [source] 2026-02-03 02:49:45
>>keepam+5W
spacex is one thing but xai accomplished what? the most racist csam prone llm?
◧◩◪◨⬒
5. keepam+S31[view] [source] 2026-02-03 03:52:31
>>cagenu+LW
I'm not aware of this - What's that?
◧◩◪◨⬒⬓
6. queenk+Ce1[view] [source] 2026-02-03 05:35:43
>>keepam+S31
Probably shouldn't speak to the brilliance of xAI engineers when you've never heard of their work
◧◩◪◨⬒⬓⬔
7. keepam+Rg1[view] [source] 2026-02-03 05:56:03
>>queenk+Ce1
Is whatever that is their work?
◧◩◪◨⬒⬓⬔⧯
8. queenk+lB3[view] [source] 2026-02-03 19:30:02
>>keepam+Rg1
Not just that, it's their one and only product, to my knowledge
◧◩◪◨⬒⬓⬔⧯▣
9. keepam+nI8[view] [source] 2026-02-05 04:26:45
>>queenk+lB3
Well, an LLM is a mirror, right? Maybe you were just using it wrong? Can you give any examples of what you used it for that led you to believe it's what you said?

I don't think your view is based on personal experience, but you get my point, yes?

The feeling I get about you here is that you simply dislike Musk and his companies and are enjoying seeing him get what he deserves, right? Which I think is the personal mirror of the "state feeling" behind the current official actions.

More broadly, your comments, and many others like them in these threads, identify a narrow band of content with the product as a whole. The implication is that if you disagree with the hatred against Musk / xAI, you must be a pervert. Which is intended as a reputational threat to intimidate people into not voicing support.

But if an LLM is used to create bad content by some, does that mean the only content it can create is bad? Does that mean that every user is using it to create bad content?

If xAI has a problem with bad content, they need better controls. But I don't think these state efforts, or the discourse, are really about the bad content. I think that issue is just a vector through which to assert pressure. I think it's because people in power want control over something that is, annoyingly to them, resisting control. And not in a way that's about "bad content", but in a way that's about inconvenient-to-them content.

[go to top]