I'd say you have much too narrow a view, Stahlfaustus.
You might get that impression from the few sentences I wrote in this thread, especially considering the topic, but in fact I'm usually the guy in favor of the more radical approaches to optimizing algorithms.

Take a look at ATI's planned implementation of Megatexture in their technology:
They use GPU memory as a rendering cache and main memory as the storage medium that holds all the graphics data.
Megatexture copies only the needed portions of a texture from main memory to graphics memory, which reduces the bandwidth requirement because the whole texture never has to be transferred.
So far so good, but what purpose does it serve to store graphics data in main memory in the first place?
None! None except that main memory is usually cheaper than graphics memory and therefore larger, which lets you keep more data in RAM and reduces how often it has to be read from the comparatively slow hard drive.
The CPU itself does nothing with the textures, so all this achieves is storing the data where it is not needed and forcing data to be copied from one memory to the other for every single frame. This is an unacceptable waste of resources.
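Just to make the mechanism concrete, here is a minimal sketch of how I understand this kind of virtual texturing: only the tiles ("pages") the current frame actually samples get copied into a small resident cache in GPU memory. All the names (PageId, PageCache, uploadTileToGpu) are made up for illustration and are not from id's or ATI's actual code:

```cpp
#include <cstdint>
#include <unordered_set>
#include <vector>

using PageId = std::uint64_t;  // packed (x, y, mip) tile coordinate

class PageCache {
public:
    // Called once per frame with the tiles the renderer reported as visible.
    void update(const std::vector<PageId>& neededPages) {
        for (PageId id : neededPages) {
            if (resident.count(id)) continue;                    // already in GPU memory
            const std::uint8_t* tile = readFromBackingStore(id); // main memory (or disk)
            uploadTileToGpu(id, tile);                           // the only per-frame copy
            resident.insert(id);                                 // eviction omitted for brevity
        }
    }
private:
    std::unordered_set<PageId> resident;
    // Stubs standing in for the real streaming and driver calls.
    const std::uint8_t* readFromBackingStore(PageId) { return nullptr; }
    void uploadTileToGpu(PageId, const std::uint8_t*) {}
};
```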

Wouldn't it make more sense to add slow but large memory to the graphics card as a data cache instead of abusing main memory?
Imagine a graphics card with fast graphics RAM for rendering the current frame (for example 2 GB of high-clock-speed RAM) and a second, large data memory (for example 8 GB of slower, cheaper and maybe upgradeable data RAM) that holds all the graphics data. This would make copying graphics data from main memory completely obsolete and would put the data where it is needed anyway: inside the graphics subsystem!
Yes, this would make graphics cards more expensive, but it would make life for software developers (including driver programmers) a lot easier and the hardware perform a lot better.
One step closer to the "direct to metal" programming we need on PCs to cope with the more successful consoles!
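To put a rough number on why I think the copying matters, here is a back-of-the-envelope cost model. The bandwidth figures and the amount of texture data touched per frame are pure assumptions on my part, just to show the order-of-magnitude gap between pushing data over the bus and reading it from memory that already sits on the card:

```cpp
#include <cstdio>

int main() {
    // Illustrative assumptions only, roughly in the ballpark of a PCIe 2.0 x16
    // bus versus slower on-card memory; they are not measurements.
    const double megabytesPerFrame  = 64.0;    // texture data touched per frame
    const double busBandwidthGBs    = 8.0;     // host RAM -> GPU over the bus
    const double onCardBandwidthGBs = 100.0;   // large, slower "data RAM" on the card

    const double busMs    = megabytesPerFrame / 1024.0 / busBandwidthGBs    * 1000.0;
    const double onCardMs = megabytesPerFrame / 1024.0 / onCardBandwidthGBs * 1000.0;

    std::printf("copy over the bus: %.2f ms per frame\n", busMs);    // ~7.8 ms
    std::printf("copy on the card:  %.2f ms per frame\n", onCardMs); // ~0.6 ms
    // At 60 fps the whole frame budget is ~16.7 ms, so under these assumptions
    // the bus copy alone eats a large slice of it.
    return 0;
}
```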

Dramatic improvements in both memory and display technologies are already coming onto the market, including HP's memristors and LG's new 84" OLED retina TV, just to name two out of many. The issue isn't whether one game like Rage can benefit from such technology, but the future of putting all that increased bandwidth and all those display pixels to good use. Computing is all about the next big thing, and tessellation is yesterday's news and simply not up to the task. Past a certain point it has diminishing returns, as Carmack himself has pointed out, and we need to move on to new standards beyond rasterization. That means sparse voxel octrees, megatextures, and ray-cast geometry, which are capable of things rasterization alone just can't do.
As I posted elsewhere on this forum before: I consider rasterization to be at the end of its life cycle. It takes a tremendous amount of math power to get rid of the artifacts that come with it. We spend more clock cycles on correcting the artifacts rasterization produces than on rasterizing the pixels themselves.
I see Megatexture as a technology that pushes the boundaries of RASTERIZATION a tiny bit further, but I can't see how it would help in rendering voxels, since voxels shouldn't need any textures. If someone here has experience with voxels (I don't), please correct me.
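For what it's worth, my (layman's) mental model of why voxels wouldn't need textures looks something like the sparse-voxel-octree node below, where the color lives in the voxel itself instead of in a texture that has to be sampled. This is only a guess at a plausible layout, not how any real engine stores its data, so corrections are welcome:

```cpp
#include <array>
#include <cstdint>
#include <memory>

struct VoxelNode {
    // Surface attributes are stored per voxel, so there is no texture lookup.
    std::uint8_t r = 0, g = 0, b = 0;      // per-voxel color instead of a texel
    std::int8_t  nx = 0, ny = 0, nz = 127; // quantized normal
    bool isLeaf = true;
    // "Sparse": empty regions simply have null children.
    std::array<std::unique_ptr<VoxelNode>, 8> children;
};
```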
As for raytracing: it enhances visual quality, but at a tremendous loss in rendering speed. In fact, the cost-benefit analysis would come out negative, since "casual Joe" wouldn't really see the difference in quality but would immediately notice the lower frame rate or the loss in detail.
In my opinion, it is too early to talk about raytracing as an alternative to rasterization.
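To illustrate where the speed goes, here is a deliberately naive skeleton of the inner loop: one ray per pixel, tested against every object. Real ray tracers cut this down with acceleration structures, but the per-pixel cost is still what makes it so much more expensive than rasterization. All the types and the numbers in the comments are assumptions for illustration:

```cpp
#include <cstdint>
#include <vector>

struct Ray    { float ox, oy, oz, dx, dy, dz; };
struct Sphere { float cx, cy, cz, radius; };

// Placeholder; a real test would solve a quadratic for the ray/sphere hit.
bool intersects(const Ray&, const Sphere&) { return false; }

std::uint64_t countIntersectionTests(int width, int height,
                                     const std::vector<Sphere>& scene) {
    std::uint64_t tests = 0;
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            Ray primary{};                   // would be built from the camera here
            for (const Sphere& s : scene) {  // brute force: pixels * objects
                ++tests;
                (void)intersects(primary, s);
            }
        }
    }
    return tests;  // at 1920x1080 with 10,000 objects that is already ~2e10 tests
}
```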
I'm curious what the next generation of gaming consoles will deliver. The selling point of bringing high-definition graphics to the home consumer, as we saw with this generation, won't be there for the next one. And stereo 3D technology doesn't look like it's going to be of much importance.
If they still rely on rasterization, the results won't be that much better.
My personal preference would be to give voxels (in addition to rasterization) a chance.
As for hardware acceleration, again, it's also a question of what will work for portable devices. Rage can already run on a wimpy iPhone, and with this kind of acceleration and improved types of memory, who knows what is possible in the future. AMD routinely experiments with new technologies on their high-end graphics cards and then begins porting them to cheaper systems. Their long-term plans include combining their new Bulldozer CPU architecture with this new Southern Islands GPU architecture in their Trinity APU, which could redefine affordable desktop and portable gaming.
Yep. But in such a big multi-billion-dollar business it's hard to take a risk and deliver a technology that might not survive the laws of the market.
Customers usually wait until a technology is widespread, which makes any fundamental change in graphics technology a tough task.
I guess this is why Carmack built the Megatexture stuff into his engine: it enhances existing technology but doesn't turn everything upside down.
When computer graphics became really big business, it stopped being innovative and fast-moving.
In the '80s and '90s we had sprites, voxels, polygons (with and without textures) and sometimes even mathematical shapes like cylinders, circles and spheres (the movie Tron!) being rendered in graphics systems.
Today it's just textured polygons, because the industry needs a reliable and easy-to-handle technology...
