Being involved in the motion picture industry, I can tell you how much it counts to have companies like Nvidia bringing more computing power almost every three months.
First of all, there is no such thing as "software" rendering, at least not anymore. Software rendering was a meaningful term when graphics hardware offered only limited rendering capabilities. Not so long ago, OpenGL was used only for previewing, and the application then used its own engine for the final render.
This is all gone! The new graphics cards are programmable (and yes, you can still code in assembler, thank God!), and yes, you can achieve the same things as with a "software" render engine... but just remember one thing:
graphics cards are nowadays like specialized coprocessors for massively parallel processing. They do not know how to handle complex data structures, because that's not their purpose! Their purpose is to process massive amounts of data as fast as possible. Handling complex data structures is still left to the CPU, which is (almost) good at it.
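To make that contrast concrete, here is a minimal sketch of the idea (plain Python, purely illustrative — the function names are mine, and a real GPU would of course run the kernel on thousands of elements at once, not in a loop):

```python
# Illustrative sketch of the GPU programming model: apply the same small
# "kernel" to every element independently. No element depends on another,
# which is exactly why the hardware can process them all in parallel.

def brighten(pixel, gain=1.2):
    """Per-pixel kernel: the same operation for every pixel, no shared state."""
    return min(255, int(pixel * gain))

framebuffer = [10, 100, 200, 250]          # a tiny stand-in for pixel data
result = list(map(brighten, framebuffer))  # a GPU performs this map in parallel
print(result)                              # -> [12, 120, 240, 255]
```

The moment one element's result depends on walking a linked structure built by another, this pattern breaks down, and that kind of work stays on the CPU.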
Besides graphics rendering, and since they are programmable, one can actually process other types of data, such as sound. Yes, you can! Eventually you will need to send the processed data to your favorite Sound Blaster, but the processing part can still be done on the "graphics card".
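As a sketch of that idea (illustrative Python again, not actual GPU code — the names are mine): treat an audio buffer exactly like a framebuffer and run a per-sample kernel over it; only the final output has to go to the sound card.

```python
# Illustrative only: the same data-parallel pattern applied to sound.
# Each sample is processed independently, so a GPU could handle them all
# at once; the sound card only ever sees the finished buffer.

def attenuate(sample, gain=0.5):
    """Per-sample kernel: scale the amplitude of a 16-bit sample."""
    return int(sample * gain)

audio_buffer = [0, 16384, -32768, 1000]          # signed 16-bit samples
processed = [attenuate(s) for s in audio_buffer]  # data-parallel step
print(processed)                                  # -> [0, 8192, -16384, 500]
```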
I also put "software" in quotes because optimizing code for the CPU involves the use of specific instruction sets, some of them dedicated to multimedia (think MMX or SSE)... so I'm sorry, I cannot see what difference there is between programming an Intel CPU and an Nvidia GPU, as long as we are talking about processing data (i.e. computation).
Having an Amiga background, I'm quite used to this kind of thinking, where you rely on specialized hardware to do specific tasks. On the Amiga, the Motorola 68000 was quite weak (only 7.14 MHz), and without the Blitter, the Copper, and other nice dedicated circuitry, well... the Amiga would not have been the Amiga.
To conclude, I would say it was about time that graphics card manufacturers brought such power to the PC. And believe it or not, they still have to work hard to unleash all the possibilities. The major problem is that this whole debate is clouded by the gaming industry (which pours 99.99% of the money into GPU R&D), and of course, with modern requirements, a multi-core CPU cannot compete. Why? Simply because its architecture is not made for that. Period.
I see a bright and colorful future for this subject. The more power Nvidia and the others (do they still exist??) bring us, the better our machines will be.
And regarding the ability of the "coders" to keep up with the new "hardware" capabilities, I'd simply blame the coders. Most of them (especially in game development companies) reuse existing code because of the extreme financial pressure of this industry. Basically it's "release or die". They do not have, or do not take, the time to explore all the possibilities.
Finally, if I can suggest something to the hardware manufacturers, it is to bring back the wonderful idea of direct access to the graphics memory, registers, etc., in order to speed up certain tasks. I'm still disappointed to see that the basic architecture of the PC has not changed since the early '90s!!!