Those million shares are worth about $5.7 billion today ($238 stock, 1:24 cumulative split since then).
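(For the arithmetic: 1,000,000 original shares × 24 post-split shares each × ~$238 ≈ $5.7 billion.)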
[1]: https://archive.ph/BNLiX/f3757388e1ee7008f0bad22261c625f1dcf...
Does anyone know where I could find out more about this? The prevailing narrative for many years has been that the Rampage GPU would have provided a technological edge for a few years.
Then I bought a 3DFx Voodoo card and started using Glide and it was night and day. I had something up the first day and every day thereafter it seemed to get more and more capable. That was a lot of fun.
In my opinion, DirectX was what killed it most. OpenGL was well supported on the Voodoo cards and Microsoft was determined to kill anyone using OpenGL (which they didn't control) to program games if they could. After about 5 years (DirectX 7 or 8) it had reached feature parity, but long before that the "co-marketing" dollars Microsoft used to enforce their monopoly had done most of the work.
Sigh.
But you could argue that the Rampage drivers weren’t optimized.
Comparison pics: https://www.marky.ca/3d/quake2/compare/content.html
The (leaked?) datasheets and miscellaneous information for the GPUs seem to be widely available, but that's still quite impressive for a single person; on a similar level to making your own motherboard: https://news.ycombinator.com/item?id=29273829
Wow, this sounds like those little cases where a few different decisions could have easily led us down into an alternate parallel world :)
Can someone expand on why NURBS didn't/don't win out against polygons?
Could this be like AI/ML/VR/Functional Programming, where the idea had been around for decades but could only be practically implemented now after we had sufficient hardware and advances in other fields?
I totally agree that the incremental innovations observed in earlier GPU platforms felt much, much more ‘obvious’ though.
It’s as if the ‘wow factor’ of graphics hardware doesn’t scale at the same rate as density.
Or perhaps releases were more spread out than they are today (compared to the annual release cycle expected today) making the jumps more obvious.
[1] https://www.researchgate.net/figure/Comparison-of-NVIDIA-gra...
Sometimes, like with CUDA, they just have an early enough lead that they entrench.
Vile player. They're worse than IBM. Soulless & domineering to the max, to every extent possible. What a sad story.
I remember being in high school and seeing the butter smooth high frame rates (for the time) being like wow.
Brian Hook was the 3dfx engineer who was the original architect of Glide. He went on to work for id Software in 1997.
Michael Abrash wrote the first 3D hardware support for Quake while working for id Software, but it wasn't for 3dfx; it was for the Rendition Verite 1000, and it was released as vQuake.
John Carmack was, of course, the lead programmer of Quake.
Hook, Abrash, and Carmack ended up working at the same company again at Oculus VR (and then Meta).
The problem was the 3D version looked/looks just plain BLURRY.
Something about my brain is able to understand, when it sees a pixelated video game, that it's an artifice, and it gives it a ton of benefit of the doubt.
I can't say it is in any way more realistic, but it feels more IMMERSIVE.
Whereas with the higher resolution one, it's an uncanny valley. Everything is too smooth and Barbara Walters-y.
Again, this is my memory of how I perceived it at the time, this isn't modern rose coloured glasses because obviously technology has improved dramatically.
I was acutely aware of the various 3D API issues during this time and this rings very true.
3dfx Oral History Panel with Ross Smith, Scott Sellers, Gary Tarolli, and Gordon Campbell (Computer History Museum)
That was for 2D; bigger, faster 3D enables new sorts of games, so that market has been growing for far longer.
I suppose you could direct rasterize a projected 3D curved surface, but the math for doing so is hideously complicated, and it is not at all obvious it’d be faster.
Vulkan is just as bad in this regard, with the complexity turned up to eleven. No wonder people call it a GPU hardware abstraction API, not a graphics API.
And on the Web, they couldn't come up with a better idea than to throw away all the existing GLSL and replace it with a Rust-inspired shading language.
>This tale brings up many “what ifs.”
What if 3dfx had realized early on that GPUs were excellent general purpose linear algebra computers, and had incorporated GPGPU functionality into GLide in the late 90s?
Given its SGI roots, this is not implausible. And given how NVidia still has a near stranglehold on the GPGPU market today, it’s also plausible that this would have kept 3dfx alive.
The problem in the case of CUDA isn't just that NVIDIA was there early, it's that AMD and Khronos still offer no viable alternative after more than a decade. I switched to CUDA half a year ago after trying to avoid it for years due to it being proprietary. Unfortunately, I discovered that CUDA is absolutely amazing - it's easy to get started, developer-friendly in that it "just works" (which is never the case for Khronos APIs and environments), and it's incredibly powerful, kind of like programming C++17 for 80 x 128 SIMD processors. I wish there were a platform-independent alternative, but OpenCL, SYCL, and ROCm aren't it.
To play games I later bought a PC Voodoo 3 which you could flash with a Mac-version ROM. Much cheaper than buying an actual Mac version.
Unreal was incredible. The whole card was incredible.
Later I put a Linux distro on the computer too, and I needed a custom patch to the kernel (2.2.18, I think) to get the Voodoo drivers working for accelerated 2D and 3D rather than the software frame buffer. It was incredible to see a Gnome 1.4 or KDE 2 desktop in high res perform really well.
It was also pretty buggy, due to experimental drivers on both Linux and Mac, I assume. The weirdest was that sometimes a bug in Linux would dump whatever was in memory onto the screen, which, shortly after booting from Mac to Linux, could result in Mac OS 9 windows appearing on a Linux desktop. You obviously couldn't interact with them; it would happen in a kernel-panic-type event where the computer would freeze with an odd mix of stuff on the screen.
Was a fun intro to Linux, not getting a nice windowing environment till after I’d learnt to patch and compile a kernel.
Back then it was normal to play with more OSes, it seems. That computer had Mac, Linux, BeOS and more on it.
https://www.quakewiki.net/profile-retro-interview-brian-hook...
https://www.youtube.com/watch?v=ooLO2xeyJZA
https://www.youtube.com/watch?v=JIOYoZGoXsw
https://www.youtube.com/watch?v=43qp2TUNEFY
The print ads were similarly incredible:
http://www.x86-secret.com/pics/divers/v56k/histo/1999/commer...
https://www.purepc.pl/files/Image/artykul_zdjecia/2012/3DFX_...
https://fcdn.me/813/97f/3d-pc-accelerators-blow-dryer-ee8eb6...
Nvidia seems utterly uninterested in learning these lessons, decades in now: they just get more and more competitive, less and less participatory. It's wild. On the one hand they do a great job maintaining products like the Nvidia Shield TV. On the other hand, if you try anything other than Linux4Tegra (L4T) on most of their products (the Android devices won't work at all for anything but Android, btw), it probably won't work at all or will be miserable.
Nvidia has one of the weirdest moats: being open-source-like & providing OK-ish open source mini-worlds, but you have to stay within 100m of the keep or it all falls apart. And yeah, a lot of people simply don't notice. Nvidia has attracted a large camp-followers group, semi-tech folk, that they enable, but who don't really grasp the weird, limited context they are confined to.
[0] https://cs.nyu.edu/~perlin/courses/fall2005ugrad/phong.html
[1] https://www.dgp.toronto.edu/public_user/lessig/talks/talk_al...
I recall ATI and Matrox both failing in this regard despite repeated promises.
The article touches on this a bit, but one of the quirky things about the original Voodoo and Voodoo 2 were that they lacked 2D entirely. You had to use a short VGA passthrough cable to some other 2D card. This also meant that they only supported fullscreen 3D, since 2D output was completely bypassed while the 3dfx card was in use.
The Voodoo 3 finally came with 2D, but I think I jumped ship to Nvidia by then.
Source: Was burned by ATI, Matrox, and 3dlabs before finally coughing up the cash for Nvidia.
Why the heck does this image claim to be taken from [Russian site] if that image is from Buyee? [0]
It has everything to do with Vulkan, given that the same organisation is handling it, and had it not been for AMD's Mantle, they would probably be discussing what OpenGL vNext should look like.
And! Still available today: DOSBox-X has Voodoo1 support built-in (or host pass-through glide support), Open Watcom C++ works well in DOSBox and will compile most examples.
OpenGL was stagnating at the time, and vendors started a feature war. In OpenGL you can have vendor-specific extensions, because it was meant for tightly integrated hardware and software. Vendors started leaning heavily on extensions to one-up each other.
The Khronos Group took ages to get up and standardize modern features.
By that time gl_ext checks had become nightmarishly complicated, and cross-compatibility was further damaged by vendors lying about their actual gl_ext support: drivers started claiming support for extensions that didn't actually work properly, and using the ext would cause the scene to not look right or outright crash.
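For flavor, a hedged sketch (plain C++-style GL 1.x code; the extension name is just an example) of what those gl_ext checks looked like, including the classic substring pitfall that made them even more fragile than they sound:

    #include <GL/gl.h>
    #include <cstring>

    bool hasExtension(const char* name) {
        // Old-style discovery: the driver hands back one giant space-separated string.
        const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
        // Naive strstr() test -- infamous for false positives (e.g. "GL_EXT_texture"
        // matching inside "GL_EXT_texture3D"); robust code had to tokenize the string,
        // and then still special-case vendors whose drivers over-reported support.
        return exts && std::strstr(exts, name) != nullptr;
    }

    // usage: if (hasExtension("GL_ARB_multitexture")) { /* take the multitexture path */ }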
Developers looked at that, and no wonder they didn't want to take part in any of it.
This all beautifully exploded a few years later when Compiz started gaining a foothold, which required this or that gl_ext and finally caused enough rage to get Khronos working on bringing the mess back under control.
By that time MS was already at DirectX 9, you could use XNA to target different architectures, and it brought networking and I/O libraries with it, making it a very convenient development environment.
*This is all a recollection from the late nineties / early 2000s and it's by now a bit blurred; it's hard to fill in the details on the specific exts. Nvidia was the one producing the most, but it's not like the blame is on them; Matrox and ATI's wonky support to play catch-up was overall more damaging. MS didn't really need to do much to win hearts with DX.
Diablo 2: Resurrected allows you to switch between the new 4k60fps graphics and the original - it's fascinating to see the difference.
That seems like a bit of anachronistic Silicon Valley mythology. Wasn't Minnesota something like the "Silicon Valley" of mainframes at the time? There were several then-major computer companies there: CDC, Cray, Honeywell, IBM, Univac, etc.
You tend to have different priorities in those ages, I guess.
It's a dark memory, sure, but probably the packaging somehow made me get attached to a "stupid computer part" (not my words) and that's interesting.
[0]: https://en.wikipedia.org/wiki/1999_%C4%B0zmit_earthquake
If you can split the screen into 64 equal chunks, there's nothing except silicon real estate stopping you splitting it into 128, or 256, or 2048. Think about how SLI worked, in the Voodoo II olden days.
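A toy sketch of that point (my own illustration, not any particular chip's actual arbitration logic): scan-line interleave just assigns alternate lines to alternate rasterizers, and nothing in the scheme cares whether the chip count is 2 or 2048.

    // Which rasterizer chip owns scanline y, Voodoo2-SLI-style interleaving.
    int chipForScanline(int y, int numChips) {
        return y % numChips;   // chip k renders lines k, k + numChips, k + 2*numChips, ...
    }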
1: https://www.tomshardware.com/reviews/3d-accelerator-card-rev...
I never again experienced such a phenomenal gap in visible performance. Happy times.
I had my share of fun with DirectMusic.
I had a Riva TNT 2 card. The only "accelerated" thing it could do in X was DGA (direct graphics access). Switched to ATI and never looked back. Of course you could use the proprietary driver. If you had enough time to solve installation problems and didn't mind frequent crashes.
Good thing nice graphics do not equal good games. My favourite multiplayer FPS games I prefer in glorious picmip 5 detail.
nvidia being the only viable solution for 3d on linux is a bit of an exaggeration imo (source: i did it for 5 years), but that was a long time ago: we have amdgpu, which is far superior to nvidia’s closed source driver.
ps: the nv1 "mis-step" was really interesting. They somehow quickly realigned with the nv3, which was quite a success IIRC.
I'm not really convinced curves are that useful as a modelling scheme for non-CAD/design stuff (i.e. games and VFX/CG): while you can essentially evaluate the limit surface, it's not really worth it once you start needing things like displacement that actually moves points around, and short of doing things like SDF modulations (which is probably possible, but not really artist-friendly in terms of driving things with texture maps), keeping things as micropolygons is what we do in the VFX industry and it seems that's what game engines are looking at as well (Nanite).
Aureal produced revolutionary audio cards, with real 3D audio - and I mean real. Close your eyes and you would track the sounds in the space around you with your eyes, from your hearing. It changed everything. I played HL with this, and CS too.
I may be wrong, but my understanding is Creative sued them into bankruptcy, bought what remained, and never used that technology. I have never forgiven Creative for this.
Compared to the official Nvidia driver.
> If you had enough time to solve installation problems and didn't mind frequent crashes
I used Nvidia GPUs from ~2001 to ~2018 on various machines with various GPUs and i never had any such issues on Linux. I always used the official driver installer and it worked perfectly fine.
Quake used miniGL, not full OpenGL.
One aspect of this story I've never seen covered is how Nvidia managed, as a quiet dark horse, to come from behind and crush 3dfx in a few years with the TNT and TNT2. The article talks about the GeForce 256, but 3dfx's crown was stolen before that.
D3D was a terribly designed API in the beginning, but it caught up fast and starting at around DX7 was the objectively better API, and Microsoft forced GPU vendors to actually provide conforming and performant drivers.
It went so fast, going from having no 3D acceleration to having more 3D acceleration than you could imagine. It died fast too, when ATI and NVIDIA became the only ones left, which is still true to this day.
Another reason might have been: early 3D games usually implemented a software rasterization fallback. Much easier and faster to do for triangles than nurbs.
Maybe being 1999 it was just a little bit too late to still fully appreciate 3dfx and modern day D3D and OpenGL took over around that time, so I just missed the proper Voodoo era by a hair.
Note that by OpenGL here I meant OpenGL using the Riva TNT (I assume the Voodoo card drivers must have been called Glide or 3DFx in the settings). I've always seen D3D and OpenGL existing side by side, performing very similarly in most games I played, and supporting the same cards, with GeForce cards etc that came later. I mainly game using Wine/Proton on Linux now by the way.
When the Voodoo 2 came out, I couldn’t find a vendor selling them for the Mac. I’m not even sure anyone did.
I bought a Voodoo 2 card that worked with a patched version of Mesa but it was slow.
I managed to find a driver on some peer sharing thing (Hotline??). I’ve no idea of its origin but it worked fine.
I don't know how the D3D design process worked in detail, but it is obvious that Microsoft had a 'guiding hand' (or maybe rather 'iron fist') to harmonize new hardware features across GPU vendors.
Over time there have been a handful of 'sanctioned' extensions that had to be activated with magic fourcc codes, but those were soon integrated into the core API (IIRC hardware instancing started like this).
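From memory (so treat the details as illustrative rather than gospel), the fourcc dance in D3D9 looked roughly like this for ATI's instancing hack: you probed for a bogus 'format' and then flipped the feature on through an unrelated render state. Here d3d and device are assumed to be existing IDirect3D9 / IDirect3DDevice9 objects.

    #include <d3d9.h>

    // Probe: ask whether a fake surface format spelled 'INST' is "supported"...
    bool hasInstancingHack = SUCCEEDED(d3d->CheckDeviceFormat(
        D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, D3DFMT_X8R8G8B8,
        0, D3DRTYPE_SURFACE, (D3DFORMAT)MAKEFOURCC('I', 'N', 'S', 'T')));

    // Enable: smuggle the same fourcc through a render state the hack repurposes.
    if (hasInstancingHack)
        device->SetRenderState(D3DRS_POINTSIZE, MAKEFOURCC('I', 'N', 'S', 'T'));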
Also, one at the time controversial decision which worked really well in hindsight was that D3D was an entirely new API in each new major version, which allowed to leave historical baggage behind quickly and keep the API clean (while still supporting those 'frozen' old D3D versions in new Windows versions).
False, 3dfx killed themselves. Their graphics chips and their architecture quickly became outdated compared to the competition. Their latest efforts towards the end of their life resorted to simply putting more of the same outdated and inefficient chip designs on the same board, leading to monstrosities: GPUs with 4 chips that came with their own power supply. Nvidia and ATI were already eating their lunch.
Also, their decision to build and sell graphics cards themselves, directly to consumers, instead of focusing on the chips and letting board partners build and sell the cards, was another reason for their fall.
Their Glide API alone would not be enough to save them from so many terrible business and engineering decisions.
>OpenGL was well supported on the Voodoo cards and Microsoft was determined to kill anyone using OpenGL
Again, false. OpenGL kinda killed itself on the Windows gaming scene. Microsoft didn't do anything to kill OpenGL on Windows. Windows 95 supported OpenGL just fine, as a first-class citizen just like Direct3D, but Direct3D was easier to use and had more features for Windows game dev, meaning quicker time to market and less dev effort, while OpenGL drivers from the big GPU makers still had big quality issues back then and OpenGL progress was stagnating.
DirectX won because it was objectively better than OpenGL for Windows game dev, not because Microsoft somehow gimped OpenGL on Windows, which they didn't.
Even though Wikipedia classifies it as vaporware, there are prototype cards and manuals floating around showing that these cards were in fact designed and contained programmable pixel shaders, notably:
- The Pyramid3D GPU datasheet: http://vgamuseum.info/images/doc/unreleased/pyramid3d/tr2520...
- The pitch deck: http://vgamuseum.info/images/doc/unreleased/pyramid3d/tritec...
- The hardware reference manual: http://vgamuseum.info/images/doc/unreleased/pyramid3d/vs203_... (shows even more internals!)
(As far the companies go: VLSI Solution Oy / TriTech / Bitboys Oy were all related here.)
They unfortunately went bust before they could release anything, due to a wrong bet on memory type (RDRAM, I think) and letting their architecture rely on that, then running out of money, and perhaps some other problems. In the end their assets were bought by ATI.
As for 3dfx, I would highly recommend watching the 3dfx Oral History Panel video from the Computer History Museum with 4 key people involved in 3dfx at the time [2]. It's quite fun, as it shows how 3dfx got ahead of the curve by using very clever engineering hacks and tricks to get more out of the silicon and data buses.
It also suggests that their strategy was explicitly about squeezing as much performance as possible out of the hardware and making sacrifices (quality, programmability) there, which made sense at the time. I do think they would've been pretty late to switch to the whole programmable pipeline show, for that reason alone. But who knows!
There were a few earlier attempts that failed for one reason or another, mostly poor performance. I think there was a French accelerator card that I can't remember the name of right now.
Now I have an Intel iGPU and don’t care. And I can afford to eat better :)
Like here, the V3 seems to have a pretty handy lead in most cases https://www.anandtech.com/show/288/14 - but that's a TNT2 (not TNT2 Ultra) and it's all at 16-bit colour depth (32-bit wasn't supported by the V3).
It was certainly an interesting time, and as a V3 owner I did envy that 32 bit colour depth on the TNT2 and the G400 MAX's gorgeous bump mapping :D
The first game that I launched was Need for Speed 2 SE, which supported 3dfx, and oh boy oh boy, the difference was night and day; it blew my mind at the time. The textures, shadows, look and feel overall were the best thing that I had ever seen. It also had exclusive features for 3dfx, like bug splatters on the screen in some parts.
A leap forward on gaming to my teenager eyes at the end of the 90s.
It's like having a basic VM, high-level languages are compiled to the intermediate representation where things are simpler and various optimizations can be applied.
Edit, obviously this is all subjective, and it's about fond memories! :)
This is interesting. I have always wondered if that is a viable approach to API evolution, so it is good to know that it worked for MS. We will probably add a (possibly public) REST API to a service at work in the near future, and versioning / evolution is certainly going to be an issue there. Thanks!
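(Concretely, the analogue would be something like freezing /v1/... endpoints once published and shipping breaking changes only under /v2/..., rather than evolving one endpoint in place; that's roughly the REST equivalent of D3D's "new API per major version" approach described above.)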
For years it was so hard to find LCDs that came close to the resolution I had on my CRT for a reasonable price.
That is opposed to say, today's brand of insincere feel-good corporate bullshit marketing.
> By 1999 it was clear that Microsoft had no intention of delivering Low Level; although officially working on it, almost no resources were dedicated to actually producing code.
No kidding...
Also, the CEO of SGI in the late 90s was ex-Microsoft and bet heavily on weird technical choices (remember the SGI 320 / 540? I do) that played no small role in sinking the boat. Extremely similar to the infamous Nokia suicide in the 2010s under another Microsoft alumnus. I think the similarity isn't due to chance.
https://www.tomshardware.com/reviews/nvidia,87.html
It’s a great article that predicted a lot of things.
Side note: love that a 25-year-old article is still accessible.
I can run it on a modern system at modern resolutions but it still looks like it did back in the day.
Sometimes adding effects and filtering gives a strange broken feeling where the art on the screen no longer matches what is in my head or something.
I can’t handle upscaling old console games either. Something is always weird.
The entire point of Vulkan is that it’s a hardware abstraction. It was invented to offer an API around low level hardware operations rather than the typical approach of graphics libraries which come from the opposite direction.
Everyone had monitors that could do 1024x768 (usually up to 1600x1200), whereas 8K monitors are much less ubiquitous today in comparison.
It was also the first PC I ever installed Linux on. My dad would not let me do such a risky operation as dual booting Linux on the family computer. I don’t even remember what distro at this point.
I thought I was buying something better than a Voodoo and got it at release before there were any reviews. Lesson learned.
It is quite insane. Now, getting to use all of them is difficult, but certainly possible with some clever planning. Hopefully as the tech matures we'll see higher and higher utilization rates (I think we're moving as fast as we were in the 90's in some ways, but some parts of how big the industry is hides the absolutely insane rate of progress. Also, scale, I suppose).
I remember George Hotz nearly falling out of his chair for example at a project that was running some deep learning computations at 50% peak GPU efficiency (i.e. used flops vs possible flops) (locally, one GPU, with some other interesting constraints). I hadn't personally realized how hard that is apparently to hit, for some things, though I guess it makes sense as there are few efficient applications that _also_ use every single available computing unit on a GPU.
And FP8 should be very usable too in the right circumstances. I myself am very much looking forward to using it at some point in the future once proper support gets released for it. :)))) :3 :3 :3 :))))
Nokia would have killed itself either way; with Elop it still tried to flop.
Every Nokia fanboy cries about EEE, but blissfully forgets what a turd the 5800 XpressMusic was, which came half a year later than the iPhone 3G.
Whatever. In late 1996, I got a PowerMac 8500/180DP (PowerPC 604e) and a 1024x768 monitor. The 8500 didn't even have a graphics card, but had integrated/dedicated graphics on the motherboard with 4MB VRAM (also S-video and composite video in and out). It came bundled with Bungie's Marathon[1] (1994) which filled the screen in 16-bit color.
There are also the issues of developer experience in the provided SDKs versus what others provide, and apparently now Khronos would rather adopt HLSL than try to improve GLSL.
Because the article is kinda hard to follow for the uninitiated. Heavy name and jargon dropping haha
My current employer has fairly recently hired a ton of ex-Google/Microsoft into upper management. They’re universally clueless about our business, spending most of their time trying to shiv one another for power.
FP8 is really only useful for machine learning, which is why it is stuck inside tensor cores. FP8 is not useful for graphics, even FP16 is hard to use for anything general. I’d say 100 Tflops is more accurate as a summary without needing qualification. Calling it “4 petaflops” without saying FP8 in the same sentence could be pretty misleading, I think you should say “4 FP8 Petaflops”.
Edit: @throwawayx38: I 100% agree with you! Thanks for your reply.
(1) Someone designs something clearly superior to other technology on the market.
(2) They reason that they have a market advantage because it's superior and they're worried that people will just copy it, so they hold it close to the chest.
(3) Inferior technologies are free or cheap and easy to copy so they win out.
(4) We get crap.
... or the alternate scenario:
(1) Someone designs something clearly superior to other technology on the market.
(2) They understand that only things that are more open and unencumbered win out, so they release it liberally.
(3) Large corporations take their work and outcompete them with superior marketing.
(4) We get superior technology, the original inventors get screwed.
So either free wins and we lose or free wins and the developers lose.
Is there a scenario where the original inventors get a good deal and we get good technology in the end?
Yes it had huge systemic issues. Structural problems, too many departments pumping out too many phones with overlapping feature sets, and an incoherent platform strategy.
But Elop flat-out murdered it with his burning platforms memo and then flogged the scraps to the mothership. It came across as a stitch-up from the word go.
RIM basically killed off a big chunk of the Nokia market, as it did for Windows CE as well.
By the time the original iPhone came out, Nokia hadn’t really put out anything to capture the mindshare in a while. They were severely hampered by the split of their Symbian lineup (S30,40,60,90) and unable to adapt to the newest iteration of smartphones.
They’d never have been able to adapt to compete without throwing out Symbian, which they held on to and tried to reinvent. Then there was the failure of MeeGo.
Nokia would have been in the same spot they’re in today regardless of Microsoft. They’d be just another (sadly) washed up Android phone brand. Just like their biggest competitors at the time: Sony Ericsson and Motorola.
But at least we got a lot of Qt development out of it.
It's not a nest, he's mostly the only one.
But they’re significantly easier to target (less feature splitting) and much more ergonomic to develop with.
The MGA Millennium had unprecedented image quality, and its RAMDAC was in a league of its own. The G200 had the best 3D image quality when it was released, but it was really slow and somewhat buggy outside of Direct3D where it shined. However, even with my significant discount and my fanboyism, when the G400 was released, I defected to NVIDIA since its relative performance was abysmal.
The way 3D rendering is done these days is drastically different from the days of OpenGL. The hardware is architecturally different, the approach people take to writing engines is different.
Also most people don’t even target the graphics API directly these days and instead use off the shelf 3D engines.
Vulkan was always intended to be low level. You have plenty of other APIs around still if you want something a little more abstracted.
It wasn't until I went to some cybercafe where they had Quake II, among others, installed for network play in computers with one of those 3dfx cards, that I actually saw what an accelerator card could turn a game into, regardless of speed, and I decided that I needed one of those.
Of course the card linked above is a server card, not a desktop or workstation card optimized for rendering.
What is that Megatron chat in the advertisement? Does it refer to a loser earth destroying character from Transformers? Rockfart?
Unless they had a dedicated harddrive, they would also need to resize existing fat32/ntfs partitions and add a couple of new partitions for Linux. This process had a certain risk.
In practice for DirectX you just use the header files that are in the SDK.
You know why it was 'XpressMusic'? Because Nokia was years late for a 'music phone'. Even Moto had the E398 and SE had both music and photo phones. By 2009 Nokia had a cheap line-up for the brand zealots (eaten up by the Moto C-series and everyone else), a couple of fetishist's phones (remember those with the floral pattern, and the 8800?) and... overpriced 'communicators' with subpar internals (hell, late PDAs on XScale had more RAM and CPU power) and a mess of Symbian versions incompatible with anything, including themselves.
Elop not only allowed MS to test the waters with mobiles, but actually saved many, many workplaces for years. The alternative to that would have been bankruptcy around 2013.
I did this with my Voodoo 2. It was like having a Voodoo Banshee. Well, maybe not quite like, but my in-window Quake 2 framerates were quite decent.
I guess Megatron is a language model framework https://developer.nvidia.com/blog/announcing-megatron-for-tr...
3dfx was run like a frat house. It was fun while I was there, but it wasn't a surprise when they announced they were shutting down.
It was an incredibly exciting time.
Maybe I'm still misinterpreting, but this was where my mind went. Man, I wish I could've appreciated how distinct the 90s were as a kid, but I was too young and dumb to have a shred of hope of being that aware!
1920x1080 would be 800x600 back in the day, something everyone used for a viable (not just usable) desktop in order to be comfortable with daily tasks such as browsing and using a word processor. Not top-end, but most games would look nice enough, such as Unreal, Deus Ex and Max Payne at 800x600, which looked great.
I don't know the details myself but as a FYI... this famous answer covering the OpenGL vs DirectX history from StackExchange disagrees with your opinion and says OpenGL didn't keep up (ARB committee). It also mentions that the OpenGL implementation in Voodoo cards was incomplete and only enough to run Quake:
https://softwareengineering.stackexchange.com/questions/6054...
The author of that answer is active on HN so maybe he'll chime in.
At the point of rasterization in the pipeline you need some way to turn your 3D surface into actual pixels on the screen. What actual pixels do you fill in, and with what color values? For a triangle this is pretty trivial: project the three points to screen-space, then calculate the slope between the points (as seen on the 2D screen), and then run down the scanlines from top to bottom incrementing or decrementing the horizontal start/stop pixels for each scanline by those slope values. Super easy stuff. The only hard part is that to get the colors/texture coords right you need to apply a nonlinear correction factor. This is what "perspective-correct texturing" is, support for which was one of 3dfx's marketing points. Technically this approach scales to any planar polygon as well, but you can also break a polygon into triangles and then the hardware only has to understand triangles, which is simpler.
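For anyone who hasn't written one, here's a hedged sketch of that scanline walk for the flat-bottom case (a general triangle is first split into a flat-bottom and a flat-top half; the names and the plot callback are just illustrative):

    struct Vec2 { float x, y; };

    // Fill a flat-bottom triangle: v0 is the apex, v1 and v2 share the bottom scanline,
    // with v0.y < v1.y == v2.y and v1.x <= v2.x. Perspective-correct texturing would also
    // interpolate u/w, v/w and 1/w along these edges and divide per pixel -- the
    // nonlinear correction factor mentioned above.
    void fillFlatBottom(Vec2 v0, Vec2 v1, Vec2 v2, void (*plot)(int x, int y)) {
        float slopeL = (v1.x - v0.x) / (v1.y - v0.y);   // x change per scanline, left edge
        float slopeR = (v2.x - v0.x) / (v2.y - v0.y);   // x change per scanline, right edge
        float xL = v0.x, xR = v0.x;
        for (int y = (int)v0.y; y <= (int)v1.y; ++y) {  // walk the scanlines top to bottom
            for (int x = (int)xL; x <= (int)xR; ++x) plot(x, y);
            xL += slopeL;
            xR += slopeR;
        }
    }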
But how do you rasterize a Bézier curve or NURBS surface? How do you project the surface parameters to screen-space in a way that doesn't distort the shape of the curve, then interpolate that curve down scanlines? If you pick a specific curve type of small enough order it is doable, but good god is it complicated. Check out the code attached to the main answer of this Stack Overflow question:
https://stackoverflow.com/questions/31757501/pixel-by-pixel-...
I'm not sure that monstrosity of an algorithm gets perspective correct texturing right, which is a whole other complication on top.
On the other hand, breaking these curved surfaces into discrete linear approximations (aka triangles) is exactly what the representation of these curves is designed around. Just keep recursively sampling the curve at its midpoint to create a new vertex, splitting the curve into two parts. Keep doing this until each curve is small enough (in the case of Pixar's Reyes renderer used for Toy Story, they keep splitting until the distance between vertices is less than 1/2 pixel). Then join the vertices, forming a triangle mesh. Simple, simple, simple.
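A hedged sketch of that recursive midpoint split, for a single cubic Bézier segment in 2D (a production tessellator would also test flatness, not just chord length, and a surface version splits in two parameter directions):

    #include <cmath>
    #include <vector>

    struct P { float x, y; };
    static P mid(P a, P b) { return { 0.5f * (a.x + b.x), 0.5f * (a.y + b.y) }; }

    // Append a polyline approximating the cubic Bezier p0..p3, splitting at t = 0.5
    // (de Casteljau) until each piece's endpoints are within maxLen of each other.
    // The caller seeds `out` with p0 before the first call.
    void subdivide(P p0, P p1, P p2, P p3, float maxLen, std::vector<P>& out) {
        if (std::hypot(p3.x - p0.x, p3.y - p0.y) <= maxLen) { out.push_back(p3); return; }
        P a = mid(p0, p1), b = mid(p1, p2), c = mid(p2, p3);
        P d = mid(a, b), e = mid(b, c), m = mid(d, e);   // m lies on the curve at t = 0.5
        subdivide(p0, a, d, m, maxLen, out);             // left half is itself a cubic Bezier
        subdivide(m, e, c, p3, maxLen, out);             // and so is the right half
    }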
To use an analogy from a different field, we could design our supercomputer hardware around solving complex non-linear equations directly. But we don't. We instead optimize for solving linear equations (e.g. BLAS, LINPACK) only. We then approximate non-linear equations as a whole lot of many-weighted linear equations, and solve those. Why? Because it is a way easier, way simpler, way more general method that is easier to parallelize in hardware, and gets the same results.
This isn't an accidental historical design choice that could have easily gone a different way, like the QWERTY keyboard. Rendering complex surfaces as triangles is really the only viable way to achieve performance and parallelism, so long as rasterization is the method for interpolating pixel values. (If we switch to ray tracing instead of rasterization, a different set of tradeoffs come into play and we will want to minimize geometry then, but that's a separate issue.)
Microsoft pushed D3D to support their own self-interest (which is totally an expected/okay thing for them to do), but the way they evolved it made it both Windows-only and ultimately incredibly complex (a lot of underlying GPU design leaks through the API into user code; or it did, I haven't written D3D code since DX10).
The lesson though, is that APIs "succeed", no matter what the quality, based on how many engineers are invested in having them succeed. Microsoft created a system whereby not only could a GPU vendor create a new feature in their GPU, they could get Microsoft to make it part of the "standard" (See the discussion of the GeForce drivers elsewhere) and that incentivizes the manufacturers to both continue to write drivers for Microsoft's standard, and to push developers to use that standard which keeps their product in demand.
This is an old lesson (think Rail Gauge standards as a means of preferentially making one company's locomotives the "right" one to buy) and we see it repeated often. One of the places "Open Source" could make a huge impact on the world would be in "standards." It isn't quite there yet but I can see inklings of people who are coming around to that point of view.
And the lossy, double-clutched analog nature of this crushed 2D clarity IME. I could immediately tell the difference just looking at a desktop screen. It was by far my biggest grievance and resulted in me staying away until fully-integrated solutions were available.
32-bit on the TNT ran at half the framerate; the performance hit was brutal. 16-bit on the TNT was ugly AF due to bad internal precision, while 3dfx did some ~22-bit dithering magic:
"Voodoo2 Graphics uses a programmable color lookup table to allow for programmable gamma correction. The 16-bit dithered color data from the frame buffer is used as an index into the gamma-correction color table -- the 24-bit output of the gamma-correction color table is then fed to the monitor or Television."
but that's exactly what you got on a Voodoo2 with a P2 450 back then at 640x480: https://www.bluesnews.com/benchmarks/081598.html
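A hedged sketch of what the quoted lookup amounts to (illustrative only; the real hardware almost certainly uses small per-channel tables rather than one 64K-entry table): the dithered 16-bit RGB565 framebuffer value indexes a table whose 24-bit entry is what actually goes out to the display.

    #include <cmath>
    #include <cstdint>

    uint32_t gammaLUT[1 << 16];   // 24-bit RGB output for every possible 16-bit RGB565 input

    void buildGammaLUT(double gamma) {
        auto curve = [&](double v) { return (uint32_t)(255.0 * std::pow(v, 1.0 / gamma) + 0.5); };
        for (int i = 0; i < (1 << 16); ++i) {
            int r = (i >> 11) & 31, g = (i >> 5) & 63, b = i & 31;   // unpack RGB565
            gammaLUT[i] = (curve(r / 31.0) << 16) | (curve(g / 63.0) << 8) | curve(b / 31.0);
        }
    }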
That brings back some memories... I remember having to pick the rendering pipeline on some games, like Quake.
I also remember the days of having to route the video cable from my graphics card to my 3dfx card then to my monitor.
Riva 128 (April 1997) to TNT (June 15, 1998) took 14 months, TNT2 (March 15, 1999) 8 months, GF256 (October 11, 1999) 7 months, GF2 (April 26, 2000) 6 months, | 3dfx dies here |, GF3 (February 27, 2001) 9 months, GF4 (February 6, 2002) 12 months, FX (March 2003) 13 months, etc ...
Nvidia had an army of hardware engineers always working on 2 future products in parallel, 3dfx had few people in a room.
OpenGL was the nicer API to use, on all platforms, because it hid all the nasty business of graphics buffer and context management, but in those days it was also targeted much more at CAD/CAM and other professional use. The games industry wasn't really a factor in road maps and features. Since OpenGL did so much in the driver for you, you were dependent on driver support for all kinds of use cases. Different hardware had different capabilities and GL's extension system was used to discover what was available, but it wasn't uncommon to have to write radically different code paths for some rendering features based on the capabilities present. These capabilities could change across driver versions, so your game could break when the user updated their drivers. The main issue here was quite sloppy support from driver vendors.
DirectX was disgusting to work with. All the buffer management that OpenGL hid was now your responsibility, as was resource management for textures and vertex arrays. In DirectX, if some other 3D app was running at the same time, your textures and vertex buffers would be lost every frame, and you'd have to reconstruct everything. OpenGL did that automatically behind the scenes. This is just one example. What DirectX did have, though, was some form of certification, e.g. "DirectX 9", which guaranteed some level of features, so if you wrote your code to a DX spec, it was likely to work on lots of computers, because Microsoft did some thorough verification of drivers, and pushed manufacturers to do better. Windows was the most popular home OS, and MacOS was insignificant. OpenGL ruled on IRIX, SunOS/Solaris, HP/UX, etc., basically along the home/industry split, and that's where engineering effort went.
So, we game developers targeted the best supported API on Windows, and that was DX, despite having to hold your nose to use it. It didn't hurt that Microsoft provided great compilers and debuggers, and when XBox came out, which used the same toolchain, that finally cinched DX's complete victory, because you could debug console apps in the same way you did desktop apps, making the dev cycle so much easier. The PS1/PS2 and GameCube were really annoying to work with from an API standpoint.
Microsoft did kill OpenGL, but it was mainly because they provided a better alternative. They also did sabotage OpenGL directly, by limiting the DLLs shipped with Windows to OpenGL 1.2, so you ended up having to work around this by poking into your driver vendor's OpenGL DLL and looking up symbols by name before you could use them. Anticompetitive as they were technically, though, they did provide better tools.
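The "looking up symbols by name" part looked roughly like this (a hedged sketch; the typedef and variable names are mine, but wglGetProcAddress and glActiveTextureARB are the real symbols):

    #include <windows.h>
    #include <GL/gl.h>

    // Anything newer than the OpenGL 1.x entry points exported by opengl32.dll
    // had to be fetched from the vendor's driver at runtime, by name.
    typedef void (APIENTRY* PFNGLACTIVETEXTUREARB)(GLenum texture);
    PFNGLACTIVETEXTUREARB pglActiveTextureARB = nullptr;

    void loadEntryPoints() {
        pglActiveTextureARB =
            (PFNGLACTIVETEXTUREARB)wglGetProcAddress("glActiveTextureARB");
        // Returns NULL if the driver doesn't export it -- yet another per-machine code path.
    }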
1) I remember buying that card with my first paycheck
2) Oh yeah, me and/or my buddy worked on that
As someone who was a little kid for most of the 90's, all I can say is thanks to everyone who worked on the stuff — it was a truly magical time when every time your dad brought you to the computer shops they'd have some new demo set up that could do things that simply were not even imaginable the last time. Plus, since I was not even a teenager yet, I didn't have to learn about the battles between which model of rendering was better; it was just pure magic.
Hopefully we’ll speed things up again soon. I’m worried that there may be a generation at some point that doesn’t realize we’re in the future.
Intel would offer up to(?) (can't recall if it was base, set, or up to) $1 million in marketing funds if my buddy and I did our objective and subjective gaming tests between the two, looking for a subjective feel that the games ran better on Intel.
The objective tests were to determine if the games were actually using the SIMD instructions...
Which is what graphics developers wanted.
The problems with OpenGL and DirectX 11 were that you had to fight the device drivers to find the "happy path" that would allow you the maximum performance. And you had zero hope of doing solid concurrency.
Vulkan and DirectX 12 directly expose the happy path and are increasingly exposing the vector units. If you want higher level, you use an engine.
For game developers, this is a much better world. The big problem is that if you happen to be an application developer, this new world sucks. There is nowhere near the amount of money sloshing around to produce a decent "application engine" like there are "game engines".
Not for the games framerates indeed, but I did set my CRT monitor at 120 Hz to avoid eyestrain. You could effortlessly switch between many framerates from 60 Hz to 160 Hz or so on those monitors and it was just a simple setting.
Today it seems there now exist LCD monitors that can do (much) more than 60 Hz, but somehow it has to have all those vendor-lock-in-sounding brand names that make it all sound a bit unreliable [in the sense of overcomplicated and vendor dependent] compared to back then, when it was just a number you could configure, a logical part of how the stuff worked.
It's not that hard--you must provide a way to use CUDA on your hardware. Either support it directly, transcompile it, emulate it, provide shims, anything. After that, you can provide your own APIs that take advantage of every extra molecule of performance.
And neither AMD nor Intel have thrown down the money to do it. That's all it is. Money. You have an army of folks in the space who would love to use anything other than Nvidia who would do all the work if you just threw them money.
You can still do that in some recent games, e.g. Doom 2016 and Half-Life: Alyx.
I never 'totally borked' a PC with LILO but definitely had to fix some things at least once, and that was with a nice thick Slackware 7.1 book to guide me.
GRUB, IIRC, vastly improved things but took a little while to get there and truly 'easy-peasy'
16 bit on TNT was fine for most of what I played at the time, although at the time it was mostly Quake/Quake2 and a few other games. Admittedly I was much more into 2d (especially strategy) games at the time, so 2d perf (and good VESA compat for dos trash and emulators) was more important to me for the most part.
I think 3dfx had a good product but lost the plot somewhere in between/combination of their cutting 3rd parties out of the market, and not deeply integrating as quickly vs considering binning. VSA-100 was a good idea in theory but the idea they could make a working board with 4 chips in sync at an affordable cost was too bold, and probably a sign they needed to do some soul seeking before going down that path.
Now, it's possible that comment is only discernible in hindsight. After all, these folks had seemed like engineering geniuses with what they had already pulled off. OTOH, when we consider the cost jump of a '1 to 2 to 4 CPU' system back then... maybe everyone was a bit too optimistic.
They made a -lot- of stupid decisions, both in sticking to 'what they knew' and bad decisions with cutting-edge tech (N900 comes to mind, you couldn't get one in the states with the right bands for 3G).
I will always love the Lumia cameras however, even their shit tier models had great image quality.
Nvidia had a supercomputer and great hardware design software tools that were a trade secret, basically behind an off-limits curtain in the center of their office, and it helped them get chips out rapidly and on first turn. First turn means the first silicon coming back is good without requiring fixes and another costly turn.
I'd say 3dfx weren't poised to industrialize as well as Nvidia and they just couldn't keep up in the evolutionary race.
I'm not sure I understand where your worse is better idiom fits because 3dfx was better and Nvidia was worse but iterated to get better than 3dfx and won the day. Truly if worse was better in this case 3Dfx would still be around?
On the other hand, triangle-based rendering is a case of worse is better, and Nvidia learned that and switched course from their early attempts with NURB-based primitives.
To this day, this remains a mystery.
HW OpenGL was not available on the consumer machines (Win95, Win2K) at all. GLQuake used a so-called "mini-driver", which was just a wrapper around a few Glide APIs and was a way to circumvent id's contract with Rendition, which forbade them from using any proprietary APIs other than Verite (the first HW-accelerated game they released had been VQuake). By the time the full consumer HW OpenGL drivers became available, circa OpenGL 2.0, DirectX 9 already reigned supreme. You can tell by the number of OpenGL games released after 2004 (mobile games did not use OpenGL but OpenGL ES, which is a different API).
Definitely not the case.
Voodoo cards were notorious for not supporting OpenGL properly. They supported GLide instead.
3dfx also provided a "minigl" which implemented the bare minimum functions designed around particular games (like Quake) -- because they did not provide a proper OpenGL driver.
Though as the other commenter noted, NVIDIA does like getting their money's worth out of the tensor cores, and FP8 will likely be a large part of what they're doing with it. Crazy stuff. Especially since the temporal domain is so darn exploitable when covering for precision/noise issues -- they seem to be stretching things a lot further than I would have expected.
In any case -- crazy times.
Many modern models are far more efficient for inference IIRC, though I guess it remains a good exercise in "how much can we fit through this silicon?" engineering. :D
I don't remember console development fondly. This is 25 years ago, so memory is hazy, but the GameCube compiler and toolchain was awful to work with, while the PS2 TOOL compile/test cycle was extremely slow and the API's were hard to work with, but that was more hardware craziness than anything. XBox was the easiest when it came out. Dreamcast was on the way out, but I remember really enjoying being clever with the various SH4 math instructions. Anyhow, I think we're both right, just in different times. In the DX7 days, NVIDIA and ATI were shipping OpenGL libraries which were usable, but yes, by then, DX was the 800lb gorilla on windows. The only reason that OpenGL worked at all was due to professional applications and big companies pushing against Microsoft's restrictions.
Interesting that this should be the first video card I bought myself. I do remember there being issues using specific renderer libraries, swapping out Glide DLLs and that sort of stuff to make certain games work. For a DOA card I sure was happy with it!
> The alternative to that would have been bankruptcy around 2013.
The alternative would have been some restructuring by someone with a better idea than crashing the company and selling the name to his real bosses at MS.
The company was in trouble but salvageable. Elop flat-out murdered it, and it looked a lot like he did it to try to get a name brand for MS to use for its windows phones, which were failing badly (and continued to do so).
The part you're probably thinking of is GSync vs Freesync which is a feature for making a tear-free dynamic refresh rate, something that was simply impossible in the CRT days but does add some perceptual smoothness and responsiveness in games. Not using a compatible monitor just means you're doing sync with the traditional fixed rate system.
What has gotten way more complex is the software side of things because we're in a many-core, many-thread world and a game can't expect to achieve exact timing of their updates to hit a target refresh, so things are getting buffers on top of buffers and in-game configuration reflects that with various internal refresh rate settings.
How hard would it be to make a motherboard from that era?
I also had a Voodoo 3 with that box design back then, but different colors (Voodoo 3 3000 model). Actually still have the card...
The Voodoo Rush a year before that also had problems - it was a Voodoo 1 chip, but coupled to a slow 2d processor (back when that still mattered, like for just drawing windows or wallpaper in Windows) and had some compatibility problems.
It took until 3dfx's third iteration, with the Voodoo 3, to finally get 2d/3d integration right.
They should have continued maemo 5 and bet hard on it. The big rewrites for n9, and continued focus on symbian as a cash cow, hurt them.
I think that's roughly the upper limit. I think your contributing factors are going to be:
1. How much can you use tensor cores + normal CUDA cores in parallel (likely something influenced by ahead-of-time compilation and methods friendly to parallel execution, I'd guess?)
2. What's the memory format someone is using?
3. What's the dataloader like? Is it all on GPU? Is it bottlenecked? Some sort of complex, involved prefetching madness?
4. How many memory-bound operations are we using? Can we conceivably convert them to large matrix multiplies?
5. How few total kernels can we run these calls in?
6. Are my tensors in dimensions that are a factor of 64 by 64 (if possible), or if that's not really helpful/necessary/feasible, a factor of 8?
7. Can I directly train in lower precision (to avoid the overhead of casting in any kind of way)?
That should get you pretty far, off the top of my head. :penguin: :D :))))) <3 <3 :fireworks:
Here is a tip: start with the Star Wars presentation from Unreal Engine.
I remember when a friend first got an i7 machine and we decided to see just how fast Turok 2 would go. I mean, having seen Quake 3 go from barely 30 fps to up near 1,000 fps over the same time period, we figured it would be neat to see. Turns out it could barely break the 200 fps mark even though it was a good 8 times the clock rate compared with the PC we originally played it on at near 60 fps.
No use of SSE, no use of T&L units or vertex/pixel shaders. It is all very much just plain rasterisation at work.
https://en.m.wikipedia.org/wiki/TMS34010
https://en.m.wikipedia.org/wiki/Sega_Saturn
https://en.m.wikipedia.org/wiki/PlayStation_(console)
https://en.wikipedia.org/wiki/Nintendo_64
All predate the 3dfx Voodoo launch on October 7, 1996.
Ditto with today's 1920x1080 desktop resolution on my Intel NUC and games at 1280x720.
But I could run 1280x1024@60 if I wanted. And a lot of games would run fine at 1024x768.
- Saturn: It did quads.
- The N64 was an SGI machine made into a desktop. OK, not much earlier than the VooDoo.
> The lesson though, is that APIs "succeed", no matter what the quality, based on how many engineers are invested in having them succeed.
Exactly. Microsoft was willing to make things work for them. Something other vendors wouldn't do (including those who are ostensibly "open source").
For audio everyone was using 3rd party tools like Miles Sound System etc., but even OpenAL launched around 2000 already as an OpenGL companion. Video had the same thing happen with everyone using Bink which launched around 1999.
In comparison using OpenGL was a lot nicer than anything before probably DirectX 9. At that time in DX you needed pages and pages of boilerplate code just to set up your window, nevermind to get anything done.
Advanced GPU features of the time were also an issue, OpenGL would add them as extensions you could load, but in DirectX you were stuck until the next release.
And this tradition is carried on to this day in milder form when Western game devs hear Elden Ring is more popular than their game or try to "fix" visual novels and JRPGs without playing any of them.
Minor nitpick - Voodoo 3 and TNT2 weren't competing with Radeon. Among that generation were the 3dfx Voodoo 3, NVidia TNT2, S3 Savage4 and Matrox G400 - ATI's offering around this time was the Rage Fury.
The ATI Radeon was part of the next generation, along with the Geforce 256 and S3 Savage 2000.
edit: or possibly even the one after that? Wikipedia tells me the Rage Fury MAXX was out around the same time as the GeForce 256, and the Radeon only showed up around the time of the GeForce 2 family. The MAXX slipped from my memory; IIRC it was a pretty finicky card (performing badly and having buggy drivers, possibly hardware?) and a bit of a disappointment.
No. You should re-read that memo, specifically starting at "In 2008, Apple's market share in the $300+ price range was 25 percent; by 2010 it escalated to 61 percent." paragraph.
In 2010 nobody was interested in Symbian, no one else made phones on Symbian, no one would do apps for Symbian[0] - who would bother with all the Symbian shenanigans when even Nokia itself said that it would move to MeeGo 'soon', along with ~10% of the smartphone market? Money was in Apple and Android.
To be salvageable you need something in demand on the market and Nokia only had a brand. You can't salvage an 18-wheeler running off the cliff.
Personally, I had the displeasure of trying to do something on a colleague's N8 somewhere in 2011-2012. Not only was it slow as molasses, but most of the apps which relied on Ovi were nonfunctional.
Insightful tidbit from N8 wiki page:
> At the time of its launch in November 2010 the Nokia N8 came with the "Comes With Music" service (also branded "Ovi Music Unlimited") in selected markets. In January 2011, Nokia stopped offering the Ovi Music Unlimited service in 27 of the 33 countries where it was offered.
So popular and salvageable that they discontinued the service 3 months after the launch? Should I remind you that Elop's memo was a month later, in February 2011?
[0] Yep, this is what really killed WP too - lack of momentum at the start, inability to persuade Instagram to make the app for WP, failing integrations (they worked at the start! Then Facebook decided it doesn't want to be integrated anywhere because everyone should use their app) => declining market share => lack of interest from developers => declining market share => lack of...
Of course the main crime of the ex-Microsoft boss at the time wasn't that, but selling out most of SGI's IP to Microsoft and nVidia for some quick money.
At least in my eyes; it might also be because it was the first VR headset I tried, though. But the later DK2 and newer models did not capture that same feeling I had with the original DK.
We could write to buffers at 60 Hz effortlessly with computers from 1999, speeds have increased more than enough to write to buffers at 120 Hz and more, even with 16x more pixels.
1/120th of a second is a huge amount of time in CPU/GPU clock ticks, more than enough to compute a frame and write it to a double buffer to swap, and more threads should make that easier to do, not harder: more threads can compute pixels so pixels can be put in the buffer faster.
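(For rough scale, assuming a 3 GHz core: 1/120 s is about 25 million cycles, and even spread over the ~2 million pixels of a 1920x1080 frame that is on the order of a dozen cycles per pixel per refresh, before counting any GPU parallelism at all.)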
If there's problems with connector standards, software side of things, multithreading making it require third-party complexity, then that's a problem of those connector standards, the software, things like the LCD monitors themselves trying to be too smart and add delay, etc... Take also for example the AI upscaling done in NVidia cards now: adding yet more latency (since it needs multiple frames to compute this) and complexity (and I've seen it create artefacts too, then I'd rather just have a predictable bicubic or lanczos upscaling).
Same with audio: why do people tolerate such latency with bluetooth audio? Aptx had much less latency but the latest headphones don't support it anymore, only huge delay.
How does this relate to the NV-1? I thought it used quads instead of triangles. Did it do accelerated NURBs as well?
The GeForce blew the other cards' performance out of the water. The Matrox was particularly bad, and the dual screen didn't add much; I remember maybe 2 games that supported it.
I was involved in a few of those committees, and sadly I have to agree.
The reason Khronos is often so slow to adopt features is because how hard it is for a group of competitors to agree on something. Everybody has an incentive to make the standard follow their hardware.
A notable exception to this was the OpenCL committee, which was effectively strongarmed by Apple. Everybody wanted Apple's business, so nobody offered much resistance to what Apple wanted.
But right now, I too wish we would have a more liveable planet instead of perfect entertainment in polluting, air-conditioned cities. But I studied the wrong topic for me to actually thrive in the former, and not the latter environment.
fun fact: the very same technique used by Freesync, delaying the vsync, works with CRTs
Genericness of Variable Refresh Rate (VRR works on any video source including DVI and VGA, even on MultiSync CRT tubes) https://forums.blurbusters.com/viewtopic.php?f=7&t=8889
Until 2020, this was always a myth. When matching features and performance, the price of a Mac was always within $100 of the PC that was its equal. Not anymore with Apple Silicon: now, when matching performance and features, you'll have a PC costing twice as much or more.
https://vintage3d.org/pcx1.php
"Thanks to volumes defined by infinite planes, shadows and lights can be cast from any object over any surface."
"Voodoo2 Graphics uses a programmable color lookup table to allow for programmable gamma correction. The 16-bit dithered color data from the frame buffer is used an an index into the gamma-correction color table -- the 24-bit output of the gamma-correction color table is then fed to the monitor or Television."
It's not intuitively obvious to me how a rasterizer accelerator would render NURBS surfaces at all (edit: without just approximating the surface with triangles/quads in software, which any competing card could also do)
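That software-tessellation path is roughly the sketch below: evaluate the parametric surface on a UV grid and hand the resulting triangle list to whatever rasterizer you have. eval_surface() here is a hypothetical placeholder (a saddle); a real NURBS evaluator (knot vectors, basis functions, rational weights) would sit behind it.

    /* Turn a parametric surface into a triangle list any rasterizer can draw. */
    #include <stddef.h>

    typedef struct { float x, y, z; } Vec3;

    /* Placeholder surface: a simple saddle; a real NURBS evaluator goes here. */
    static Vec3 eval_surface(float u, float v) {
        Vec3 p = { u, v, u * u - v * v };
        return p;
    }

    /* Fills `verts` ((n+1)*(n+1) entries) and `indices` (n*n*6 entries) with
     * a triangulated n-by-n grid sampled over the surface. */
    static void tessellate(int n, Vec3 *verts, unsigned *indices) {
        for (int j = 0; j <= n; ++j)
            for (int i = 0; i <= n; ++i)
                verts[j * (n + 1) + i] = eval_surface((float)i / n, (float)j / n);

        size_t k = 0;
        for (int j = 0; j < n; ++j)
            for (int i = 0; i < n; ++i) {
                unsigned a = j * (n + 1) + i, b = a + 1;
                unsigned c = a + (n + 1),     d = c + 1;
                indices[k++] = a; indices[k++] = c; indices[k++] = b;  /* tri 1 */
                indices[k++] = b; indices[k++] = c; indices[k++] = d;  /* tri 2 */
            }
    }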
O2 dynamic memory sharing allowed things impossible on all other machines with its infinite texture memory, like mapping several videos seamlessly on moving 3D objects (also thanks to the built-in MJPEG encoding/decoding hardware).
Between a Celeron 333A running at 550MHz and a dual voodoo2 you could drive games at pretty ridiculous frame rates.
MMX is fixed point and shares register space with FPU. Afaik not a single real shipped game ever used MMX for geometry. Intel did pay some game studios to fake MMX support. One was 1998 Ubisoft POD with a huge "Designed for Intel MMX" banner on all boxes https://www.mobygames.com/game/644/pod/cover/group-3790/cove... while MMX was used by one optional audio filter :). Amazingly someone working in Intel "developer relations group" at the time is on HN and chimed in https://news.ycombinator.com/item?id=28237085
"I can tell you that Intel gave companies $1 million for "Optimized" games for marketing such."
$1 million for one optional MMX optimized sound effect. And this scammy marketing worked! Multiple youtube reviewers remember vividly how POD "runs best/fastest on MMX" to this day (LGR is one example).
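For anyone who never touched MMX: the sketch below (my own example, nothing to do with POD's code) shows the two points above, packed 16-bit integer arithmetic and the EMMS instruction you need because MM0-MM7 alias the x87 FPU register file.

    /* MMX basics: packed 16-bit fixed-point adds, then _mm_empty() (EMMS)
     * before any floating-point work, since MMX state lives in the x87
     * registers. Purely illustrative, x86 only. */
    #include <mmintrin.h>
    #include <stdio.h>

    int main(void) {
        __m64 a = _mm_set_pi16(1000, 2000, 3000, 4000);
        __m64 b = _mm_set_pi16( 100,  200,  300,  400);

        union { __m64 v; short s[4]; } out;
        out.v = _mm_add_pi16(a, b);   /* four 16-bit adds in one instruction */

        _mm_empty();                  /* EMMS: hand the registers back to x87 */

        printf("%d %d %d %d\n", out.s[0], out.s[1], out.s[2], out.s[3]);
        return 0;
    }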
$70 million in 2000 is not "$121,613,821.14 in 2023", it's "about $120 million in 2023".
The number is changing at about 7 cents per second; those final 14 cents carry no information whatsoever.
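The same point in code, with a purely illustrative helper of my own: an input known to roughly two significant figures should be reported to roughly two significant figures.

    /* Round a value to a given number of significant figures. */
    #include <math.h>
    #include <stdio.h>

    static double round_sig(double x, int sig_figs) {
        if (x == 0.0) return 0.0;
        double scale = pow(10.0, sig_figs - 1 - (int)floor(log10(fabs(x))));
        return round(x * scale) / scale;
    }

    int main(void) {
        double inflated = 121613821.14;                    /* false precision  */
        printf("about $%.0f\n", round_sig(inflated, 2));   /* about $120000000 */
        return 0;
    }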
I tried to find the information, and the best I could find is this better-than-average discussion/podcast on the history of Nvidia.
They briefly touch on the chip-emulation software that they felt they desperately needed in order to get back into the game after the NV1 flopped.
The NV3 (Riva 128) was designed rapidly (six months) with the use of what I'd call their supercomputer, most likely a cluster of PCs or workstations, running the proprietary chip-emulation software. This advantage carried over into further generations of Nvidia hardware.
IIRC the chip-emulation startup was founded by a university friend of Jensen. The podcast says they failed later, which is unfortunate.
https://www.acquired.fm/episodes/nvidia-the-gpu-company-1993...
- Curved surfaces in Quake 3: https://www.gamedeveloper.com/programming/implementing-curve...
- Rhino 3D support: https://www.rhino3d.com/features/nurbs/
Also, its extensions make it an interesting laboratory for various vendors' experiments.
But there were all sorts of options to restructure a company that big, which had only just been surpassed by Android at the time of the memo. Tanking what was left of the company and selling out to MS was probably the worst of them.
It's quite funny; the Guardian article on the memo, which reproduces it in full, is here - https://www.theguardian.com/technology/blog/2011/feb/09/noki...
First comment below the line "If Nokia go with MS rather than Android they are in even bigger trouble."
Everyone could see it, apart from Stephen Elop, who was determined to deliver the whole thing to MS regardless of how stupid a decision it was.
That's the problem: momentum, and, more importantly, momentum in a big-ass company notorious for its bureaucracy, red tape, and a cultural[0] policy of doing nothing beyond what's needed while jealously protecting one's own domain from everyone.
> Everyone could see it
You are forgetting that if Nokia had pivoted to Android in February 2011, the first usable, mass-produced unit (also the company's first on the platform, i.e. with all the bugs and errors you can expect from a first-in-line product) would have arrived in Q4 2012 at the very, very best. A more sane estimate, considering their total unfamiliarity with the platform and, again, the cultural nuances, would be somewhere in Q1-Q2 2013. There it would have had to compete with an already established Android market (the Moto RAZR would have been over a year old) and the iPhone 5.
Even if they had somehow managed to do it in less than a year (as if a fairy godmother had come and magically done everything for them, including production and shipping), they would have needed to compete with the iPhone 4S, which, as we know, was extremely popular and which people held onto for years.
No fucking way they could have done anything to stay afloat. That is why I say they would have been bankrupt by 2013. You may dislike Elop and his actions as much as you want, but there was zero chance they could have done it themselves.
Oh, one more thing. Sure, in 2009 Nokia had the money. By 2011 creditors had lowered Nokia's rating (reflected in the memo), which means that by 2012 they would have had no spare money, i.e. cash. And you are probably forgetting that Nokia had a ridiculous number of employees (which is clear from how many were let go in 2013-2014). You need lots and lots of money to support that many people, and you can't tell them to fuck off like in the US with at-will employment; you need to provide severance and pension funds for everyone. Without money the only thing you can do is close the doors and file for bankruptcy. If you want to continue your business, you first have to pay out those social obligations. And that costs money. And guess who not only had the money but was willing to pour it into Nokia?
[0] Literally. I've read the 'memoirs' of a guy who worked there before and during this period, and I had a friend working in Finland some time later. The stories she told about passiveness, lack of enthusiasm, and constant attempts to evade responsibility just confirmed the things I already knew at the time, and no amount of naked sauna helps. Hell, they didn't even have the guts to fire her, instead performing macabre dances to force her to quit. Which, after more than half a year of doing literally nothing (their way of getting her out), she gladly did, and she went to MS.
Funny enough, she is at Google now, and some of the shit happening there directly resembles what was happening in Finland almost a decade ago.
Unfortunately, for mine the foolery didn't work perfectly: I recall the card was a bit unstable when running that way, or maybe it just didn't achieve the best performance possible for one of those fiddled M64s.
Not really a huge surprise, IIRC it was a super cheap card...
I'm not forgetting anything. All of these issues were present in a switch to the Microsoft platform too, and it was already clear to most observers, when they made that move, that the MS platform was dead in the water.
> they would have needed to compete with the iPhone 4S
As did every other player, and outside the US the iPhone was not dominant in the same way.
> you can't tell them to fuck off like in the US
They had engineering units in the US which they could have done that to. And there are things you can do in most European countries too, when situations are dire.
> You may dislike Elop and his actions as much as you want, but there was zero chance they could have done it themselves.
I very much disagree, as do many observers. It could have been turned around with good management, but that doesn't seem to have been Elop's aim; his aim seemed to be to fulfil goals for MS.
> And guess who not only had the money but was willing to pour it into Nokia?
Yes, it was a stitch-up job for MS to buy an established name to try to save their dead mobile platform.
fglrx was always a terrible experience indeed, so AMD was no match for Nvidia's closed-source driver.
So, once upon a time (I'd say 2000-2015) the best Linux driver for discrete GPUs was Nvidia's closed-source one. Nowadays it's AMD's open-source one. Intel has always been good, but its GPUs just aren't powerful enough.
You were smarter than me. I wanted all those free compilers so badly I just went and installed redhat on the family pc. Ask me how well that conversation went with the old man...
The "Apple is expensive"-myth has been perpetuated since the days of 8-bit computing. Less expensive computers are cheaper because they have fewer features, use inferior parts, and are simply not as performant. But all that is behind us with Apple Silicon. Now you'd be hard-pressed to find a PC that performs half as well as the current line up of low-end Macs for their price.
For most entry-level stuff performance is not that important, so that's not the metric customers focus on (price is). A desktop all-in-one from e.g. Lenovo starts at 600 euros; the cheapest iMac starts at 1500. A reasonable Windows laptop starts at around 400 euros, while the MacBook Air starts at 1000. It's not that the Apple machines aren't better; it's just that lots of folks here don't want to pay the entry fee.
Same reason most people here don't drive BMWs but cheaper cars.
Also had no idea about them paying for those optimizations but I am not surprised one bit. It is very in character for Intel. ;)