Dark Bit Factory & Gravity

GENERAL => General chat => Topic started by: Shockwave on June 01, 2008

Title: End for software rendering?
Post by: Shockwave on June 01, 2008
This is quite interesting, I don't know how true it is but the letter seems to make sense..


http://www.theinquirer.net/gb/inquirer/news/2008/04/24/nvidia-declares-war-intel
Title: Re: End for software rendering?
Post by: Jim on June 01, 2008
For general computing tasks, it's obvious Intel are stuck around 4GHz and their solution is more cores.  That's a trick GPU makers have been using all along - more texture/vertex units backed up by huge memory bandwidth.  Who knows where it will end up?

Interestingly, the PS3 Linux people are frantically working on a software rendered OpenGL driver to run on the Cells because Sony won't release access to the nVidia RSX chip!

Jim
Title: Re: End for software rendering?
Post by: rain_storm on June 02, 2008
I can't see soft rendering dying out any time soon - for as long as GPUs have existed we've still chosen to soft render. But I would love to see the day when my CPU can step in whenever my NVidia card needs to fall back to software rendering, or the day when we have as much freedom to program the GPU as we do the CPU, with full access to all registers and RAM, etc. When we have that freedom then maybe soft rendering will be a thing of the past.
Title: Re: End for software rendering?
Post by: Shockwave on June 02, 2008
Quote from: rain_storm
I would love to see the day when my CPU can step in whenever my NVidia card needs to fall back to software rendering.

Doesn't that happen already?
Perhaps when this happens, the CPU is given such a small amount of the system resources that it can't cope?

The best examples of CPU based software rendering I have seen on this forum are from Stonemonkey, his 3D engines in various stages, and also to some extent thygrion's (now known as ferris) raytracer. These programs show that it is possible to render very complicated 3D scenes at 640x480 at a good frame rate on a modest (say, 3.0GHz) PC.

Those examples are pretty highly optimised, and there are probably not many occasions where GL needs to fall back on software for everything..

Remember that the Atari ST did some amazing stuff at less than 8MHz and without a lot of the hardware that the Amiga had.

In fact, just look at Elite on a 386dx2. Or on a BBC for that matter.
Title: Re: End for software rendering?
Post by: rain_storm on June 02, 2008
Ahh, I thought that the GPU did all of that: software implementations for shaders and what not.

I know what you're saying though. There are a lot of things that can be done without hardware acceleration that are just as good as anything else out there. That is something that won't disappear just because a hardware version is available.
Title: Re: End for software rendering?
Post by: Shockwave on June 02, 2008
Quote from: rain_storm
Ahh, I thought that the GPU did all of that: software implementations for shaders and what not.

I don't know if I'm right or not, Rain, but I do have a laptop here with no gfx card, and all my intros which use OpenGL seem to run fine (albeit slowly) on it.
Title: Re: End for software rendering?
Post by: ninogenio on June 02, 2008
I don't think software rendering will die out for quite a while. If our systems change to gfx-oriented parallel processors like the SPUs in the PS3, I think people with the know-how will just change the way PTC or some other pixel rendering software works to make it compatible. And if gfx chips were the way forward, they would have to be a lot more flexible than they currently are, which means they would have to be able to accept opcodes in much the same manner that our standard CPU does.

What I think would make sense right now, and what I'd like to see, is for hardware vendors to slow down a bit and let software developers catch up. I mean, you can have two quad-core 64-bit Intel CPUs with two-core SLI GeForce 9xxx cards and god knows what else, but is software going to push that for a while? Can't see it....

In the good old days programmers pushed hardware vendors for speed, and now that's not so much the case. It's a bit sad really, and by the looks of it it's only going to get worse.
Title: Re: End for software rendering?
Post by: Paul on June 05, 2008
http://fastra.ua.ac.be/en/index.html
A computer with 4 Nvidia 9800GX2s that performs almost as well as, or even better than, a supercomputer with 512 Celeron processors, at least in their tomography tests.
Title: Re: End for software rendering?
Post by: hellfire on June 05, 2008
Quote
http://fastra.ua.ac.be/en/index.html
I totally dig that accent & gesture of Dr K.J. Batenburg :D
Using dedicated graphics hardware to perform image processing is not an *exceptionally* clever idea, though...
Title: Re: End for software rendering?
Post by: rain_storm on June 10, 2008
Quote from: Wikipedia (http://en.wikipedia.org/wiki/GeForce_9_Series)
Dual PCBs, dual GPU design
197 W power consumption [14].
Two 65nm process GPUs, with 256 total Stream Processors (128 per PCB)[15].
The 9800 GX2 is at least 30% faster than the 8800 Ultra.
Supports Quad SLI.
1 GiB (512 MB per PCB) memory framebuffer.
Supports DirectX 10, Shader Model 4, OpenGL 2.1, and PCI-Express 2.0.
Outputs include two DVI ports, an HDMI output, and S/PDIF in connector on-board for routing audio through the HDMI cable [16].
An 8-pin and a 6-pin power connector.
Clocks (Core/Shader/Memory): 600 MHz/1500 MHz/2000 MHz [17]
256-bit memory interface[17]
128 GB/s memory bandwidth[17]
Released date: March 18, 2008
1GB memory (times 4 = 4GB)
128 GB/s bandwidth (times 4 = 512 GB/s)
clock speed 600MHz / 1500MHz / 2000MHz (times 4 = 2.4GHz - 8GHz!!!)

With that kind of bandwidth it's no wonder they outperform CPUs. 128 GB/s is an incredible bus speed. Add to that the fact that the GPU has hardware implementations to speed up certain calculations.
It's not the CPU or the GPU that most of the money will be going into over the next few years, but the memory. This is still by far the slowest part of any system.
Title: Re: End for software rendering?
Post by: HiQ on June 11, 2008
I was recently thinking of the same/similar question about software vs hardware rendering: wondering if I should just go ahead and focus on opengl or do my own little things on software. The obvious thing was that for the big mainstream stuff (read: games) hardware via opengl or d3d is largely the only way to go - and this has been true for some years with the games requiring hardware pixel shaders and T&L. As for things in general, who knows. AMD bought ATI and Nvidia made a deal with Via to work with their 'Nano', which is a CPU. So it seems the competition just hates Intel, not CPUs. :P

That said, for the time being I've decided to do my own little things, in software. I can (hopefully) exploit special cases for size and speed and generic 'I-Made-That' fun advantages. Also, 3D accelerator people are getting diminishing returns by now: some games with HDR rendering were hyped about, yet apart from the occasional neat specular effect, scenes often looked unnatural or plain worse... So I'd rather see a(n eventual) growth in AI, dynamic sound, and all such content requiring generic processing units.
Title: Re: End for software rendering?
Post by: Stonemonkey on June 17, 2008
I just don't get that at all.
Title: Re: End for software rendering?
Post by: ferris on June 25, 2008
Actually, Rain_storm, it used to be (and maybe still is?) possible to code asm for the GPU with shaders, back in the pre-HLSL age.

This gives the exact same amount of control over the GPU as we have over the CPU, except that GPUs have slightly different instruction sets than CPUs.

Someone correct me if I'm wrong here.
Title: Re: End for software rendering?
Post by: Jim on June 25, 2008
Direct3D used to make you do that.  The documentation is almost non-existent though.  I would love to see a good doc.  These days I believe they have a shader compiler, like OpenGL, but I haven't researched any of that... yet :D

Imagine your data set as a texture and your output as colours/bitmaps or a 1d/2d array of values - you can kind of see how GPU programming might work.  The tricky part is organising your data so you can look at it in the ways that GPUs can look at it.

Jim
Title: Re: End for software rendering?
Post by: stormbringer on June 25, 2008
Being involved in the motion picture industry, I can tell you how much it counts to have companies like Nvidia bringing more computing power almost every three months.

First of all, there is no such thing as "software" rendering, at least not anymore. Software rendering was a good term when the graphics hardware only offered limited rendering capabilities. Not so long ago OpenGL was only used for previewing, and then the software used its own engine for the final render.

This is all gone! The new graphics cards are programmable (and yes, you can still code in assembler, thank god!) and yes, you can achieve the same things as in a "software" render engine... but just remember one thing:

graphics cards are nowadays like specialized coprocessors for massive parallel processing. They do not know how to handle complex data structures, because that's not their purpose! Their purpose is to process massive amounts of data as fast as possible. Complex data is still left to the CPU, and it's good at that (almost).

Besides graphics rendering, and since they are programmable, one can actually process other types of data, such as sound, etc. Yes, you can! Eventually you will need to output your processed data to your favorite Soundblaster, but still, the processing part can be done on the "graphics card".

I also put software into quotes because optimizing code for the CPU involves the use of specific instructions, sometimes dedicated to graphics... so I'm sorry, I cannot see what difference it makes between programming an Intel CPU or an Nvidia GPU, as long as we talk about processing data (i.e. computation).

Having an Amiga background, I'm quite used to this kind of thinking, where you rely on specialized hardware to do specific tasks. On the Amiga the Motorola 68000 was quite weak (only 7.14MHz) and without the Blitter, the Copper and other nice dedicated circuitry, well... the Amiga would not have been the Amiga.

To conclude, I would say that it was about time that graphics card manufacturers brought such power to the PC. And believe me or not, they still have to work hard in order to unleash all the possibilities. The major problem is that all this debate is clouded by the gaming industry (which pours 99.99% of the money into R&D for GPUs) and of course, with the modern requirements, a multi-core CPU cannot compete. Why? Simply because its architecture is not made for that. Period.

I see the future bright and colorful regarding this subject. The more power is brought to us by Nvidia and others (do they still exist??) the better our machines will be.

And regarding the ability of the "coders" to follow up with the new "hardware" capabilities, I'd simply blame the coders. Most of them (especially in the game development companies) reuse existing code because of the extreme financial demands of this industry. Basically it's "release or die". They do not have or take time to explore all the possibilities.

Finally, if I can suggest something to the hardware manufacturers, it is to bring back the wonderful idea of having direct access to the graphics memory, registers, etc. in order to speed up some tasks. I'm still disappointed to see that the basic architecture of the PC has not changed since the early 90's!!!