AMD Implementing Hardware Acceleration for Megatextures?

Post » Sat May 12, 2012 3:50 pm

Speed is only an issue because cost is an issue. It's all about getting the maximum bang for your buck, and unless you have a serious need for more accuracy and speed, you don't go and use CPUs for the job, because the cost is simply prohibitive.

When talking about real-time graphics, speed is also an issue for making certain features available at all. Imagine doing frame buffering, including AA, in main memory on the main CPU: it would be absolutely impossible to maintain acceptable frame rates. Vertex shading can be done in software, but pixel shading is impossible to do that way unless you want 1 frame per second.
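Just to put rough numbers on that (my own back-of-the-envelope figures, nothing official): at 1920x1080 you have about 2 million pixels, so 60 fps means roughly 120 million pixel shader invocations per second. Even a modest shader of a few dozen arithmetic operations per pixel lands you at several billion operations per second for shading alone, before texture filtering and blending, and that is more than a general-purpose core can realistically deliver.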
As for AMD's chip design, the idea that the marketing weasels somehow decide what goes on a chip is laughable. This is about the evolution of their existing architecture and long term chip designs.
Why is that laughable? It's the marketing and design guys who decide what goes into a piece of software; it's usually not the programmer. Same with hardware: it's not the engineers' decision alone which features go into a chip. The product has to sell, so you have to think about the likely costs and the resulting sales. If you implement features that nobody uses, or lack features that might become the next standard, you'll have a hard time selling your product. It is crucial to think about this before releasing a product. Look at Apple: a perfect example of a company where the business people, not the tech guys, are in command, and Apple products sell like hotcakes.
Whether or not partially resident textures fit neatly into their current designs, they've added hardware acceleration because they believe it is that important to future designs. It is the future of video games if they are ever to move beyond the limits of rasterization, which every major developer on the planet is already researching. Again, it's about providing the best-looking picture as fast and as cheaply as possible. Megatextures are merely the first step in that direction, and until somebody comes up with something better I expect to see every major GPU manufacturer come out with its own hardware acceleration for partially resident textures in the near future.
Don't get me wrong here:
I welcome any kind of enhancement to contemporary real-time rendering technology. It's not that I'm making fun of what ATI is planning. It's just that I see this as the typical "we lack a feature, so implement it and talk about it" move to regain lost reputation. As I said: Megatexture performs very well in software. Rage performs very well on consoles and on Nvidia cards, but it has problems on ATI-based PC systems, so this is no coincidence.
User avatar
Ebou Suso
 
Posts: 3604
Joined: Thu May 03, 2007 5:28 am

Post » Sun May 13, 2012 4:54 am

Because doing something in hardware is always way faster. You can even do software tessellation, but it would kill the performance. Megatexture is able to run in software, but it could offer far more performance if done in hardware. More performance means better image quality (on the same hardware), as you can use better/more geometry, lighting and textures.

Right! But looking at both techniques, tessellation will gain far more from a hardware implementation than Megatexture.
Tessellation is a mathematical process that benefits heavily from extra transistors on the chip doing the work.
Megatexture is primarily a data management process that does not involve serious amounts of mathematical calculation the way effect shading and so on do.
I know that it is primarily the textures that require bandwidth, so using alternative caching methods makes sense.
But looking at Rage's bottlenecks, they are primarily (main) memory size and (hard drive) access speed, not so much the performance and memory of the GPU in use.
What we see here is another attempt to manage data transfer from (cheaper) large system memory to (expensive) smaller graphics card memory. Megatexture is a great thing, so no criticism of the technology itself, but I doubt it will benefit that heavily from a hardware implementation.
User avatar
Eve(G)
 
Posts: 3546
Joined: Tue Oct 23, 2007 11:45 am

Post » Sat May 12, 2012 5:36 pm

I'd say you have much too narrow a view, Stahlfaustus. Dramatic improvements in both memory and display technologies are already coming onto the market, including HP's memristors and LG's new 84-inch ultra-high-definition TV, just to name two of many. The issue isn't whether one game like Rage can benefit from such technology, but how all that increased bandwidth and all those display pixels will be put to good use in the future. Computing is all about the next big thing, and tessellation is yesterday's news, just not up to the task. Past a certain point it simply has diminishing returns, as Carmack himself has pointed out, and we need to move on to new standards beyond rasterization. That means sparse voxel octrees, megatextures, and ray-cast geometry, which are capable of things rasterization alone just can't do.

As for hardware acceleration, again, it's also a question of what will work for portable devices. Rage already plays on a wimpy iPhone, and with this kind of acceleration and improved types of memory, who knows what will be possible in the future. AMD routinely experiments with new technologies on its high-end graphics cards and then begins porting them to cheaper systems. Its long-term plans include combining the new Bulldozer CPU architecture with this new Southern Islands GPU architecture in the Trinity APU, which could redefine affordable desktop and portable gaming.
User avatar
michael danso
 
Posts: 3492
Joined: Wed Jun 13, 2007 9:21 am

Post » Sat May 12, 2012 3:36 pm

I'd say you have much too narrow a view, Stahlfaustus.

You might get that impression from the few sentences I wrote in this thread, especially considering the topic, but in fact I'm usually the guy in favor of more radical approaches to optimizing algorithms. :biggrin:

Take a look at ATI's planned implementation of Megatexture in their technology:
They use the GPU memory as a rendering cache and main memory as the storage medium that holds all the graphics data.
Megatexture copies only the needed portions of a texture from main memory to graphics memory, which reduces the bandwidth requirement because the whole texture never has to be transferred.
So far so good, but what purpose does storing graphics data in main memory serve in the first place?
None! None except for the fact that main memory is usually cheaper than graphics memory and therefore larger, which lets you hold more data in RAM and reduces the need to read it from the comparatively slow hard drive.
The main CPU does nothing with the textures, so all this achieves is storing the data where it is not needed while forcing a constant copy from one memory to the other, every single frame. This is an unacceptable waste of resources. :yuck:
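
To make that concrete, here is a minimal sketch of what "copy only the needed portions" looks like in practice, using plain OpenGL calls the way a software implementation has to do it today. The tile size, cache layout and all the names are my own invention for illustration, not id's or AMD's actual code:

#include <GL/gl.h>

static const int TILE = 128;        // one virtual-texture page, assumed to be 128x128 texels
static const int CACHE_TILES = 32;  // physical cache of 32x32 tiles = 4096x4096 texels

static GLuint physicalCache = 0;    // the only texture that ever lives in video memory

// Allocate the physical cache once at startup; no image data is uploaded yet.
void createPhysicalCache() {
    glGenTextures(1, &physicalCache);
    glBindTexture(GL_TEXTURE_2D, physicalCache);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8,
                 CACHE_TILES * TILE, CACHE_TILES * TILE, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, 0);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

// Called only for the tiles the engine decides are visible this frame.
// 'texels' points to TILE*TILE*4 bytes already read from disk into main memory.
void uploadTile(int slotX, int slotY, const unsigned char* texels) {
    glBindTexture(GL_TEXTURE_2D, physicalCache);
    // One small tile crosses the bus instead of the whole multi-gigabyte texture.
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    slotX * TILE, slotY * TILE, TILE, TILE,
                    GL_RGBA, GL_UNSIGNED_BYTE, texels);
}

The pixel shader then goes through a small indirection (page table) texture to find where in the cache a given piece of the virtual texture currently lives. As far as I understand it, AMD's PRT hardware essentially moves that indirection and the residency bookkeeping into the texture units, so the shader can sample the huge texture directly and is told when a page isn't resident.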

Wouldn't it make more sense to add slow but large memory to the graphics card as a data cache instead of abusing the main memory?
Imagine a graphics card with fast graphics RAM for rendering the current frame (for example 2 GB of high-clock-speed RAM) and a second, large data memory (for example 8 GB of slower, cheaper and maybe upgradeable RAM) that holds all the graphics data. This would make copying graphics data from main memory completely obsolete and would put the data where it is needed anyway: inside the graphics subsystem!

Yes, this would make graphics cards more expensive, but it would make life a lot easier for software developers (including driver programmers) and make the hardware perform a lot better.
One step closer to the "direct to metal" programming we need on PCs to keep up with the more successful consoles! :biggrin:

Dramatic improvements in both memory and display technologies are already coming onto the market, including HP's memristors and LG's new 84-inch ultra-high-definition TV, just to name two of many. The issue isn't whether one game like Rage can benefit from such technology, but how all that increased bandwidth and all those display pixels will be put to good use in the future. Computing is all about the next big thing, and tessellation is yesterday's news, just not up to the task. Past a certain point it simply has diminishing returns, as Carmack himself has pointed out, and we need to move on to new standards beyond rasterization. That means sparse voxel octrees, megatextures, and ray-cast geometry, which are capable of things rasterization alone just can't do.

As I posted elsewhere on this forum before: I consider rasterization to be at the end of its life cycle. It takes a tremendous amount of math power to get rid of the artifacts that come with it. We spend more clock cycles correcting those artifacts than on rasterizing the pixels themselves.
I see Megatexture as a technology that pushes the boundaries of RASTERIZATION a little bit further, but I can't see how it would help in rendering voxels, as voxels shouldn't need textures. If someone here has experience with voxels (I don't), please correct me.

As for ray tracing: it enhances visual quality, but at a tremendous loss in rendering speed. In fact, the cost-benefit analysis falls short, since "casual Joe" wouldn't really see the difference in quality but would immediately notice the lower frame rate or loss of detail.
In my opinion, it is too early to talk about ray tracing as an alternative to rasterization.

I'm curious what the next generation of gaming consoles will deliver. The selling point of bringing high-def graphics to the home consumer, as with this generation, won't be there with the next, and stereo 3D doesn't look like it's going to be of much importance.
If they still use rasterization, the results won't be that much better.
My personal preference would be to give voxels (in addition to rasterization) a chance.


As for hardware acceleration, again, it's also a question of what will work for portable devices. Rage already plays on a wimpy iPhone, and with this kind of acceleration and improved types of memory, who knows what will be possible in the future. AMD routinely experiments with new technologies on its high-end graphics cards and then begins porting them to cheaper systems. Its long-term plans include combining the new Bulldozer CPU architecture with this new Southern Islands GPU architecture in the Trinity APU, which could redefine affordable desktop and portable gaming.
Yep. But in such a big multi-billion-dollar business it's hard to take a risk and deliver a technology that might not survive the laws of the market.
Customers usually wait until a technology is widespread, which makes radical changes in graphics technology a tough sell.
I guess this is why Carmack built Megatexture into his engine: it enhances existing technology but doesn't turn everything upside down.

When computer graphics became really big business, it stopped being innovative and fast-moving.
In the '80s and '90s we had sprites, voxels, polygons (with and without textures) and sometimes even mathematical shapes like cylinders, circles and spheres (the movie Tron!) being rendered in graphics systems.
Today it's just textured polygons, because the industry needs a reliable, easy-to-handle technology... :blush:
User avatar
Miss K
 
Posts: 3458
Joined: Sat Jan 20, 2007 2:33 pm

Post » Sat May 12, 2012 8:30 pm

You might get that impression from the few sentences I wrote in this thread, especially considering the topic, but in fact I'm usually the guy in favor of more radical approaches to optimizing algorithms. :biggrin:

Take a look at ATI's planned implementation of Megatexture in their technology:
They use the GPU memory as a rendering cache and main memory as the storage medium that holds all the graphics data.
Megatexture copies only the needed portions of a texture from main memory to graphics memory, which reduces the bandwidth requirement because the whole texture never has to be transferred.
So far so good, but what purpose does storing graphics data in main memory serve in the first place?
None! None except for the fact that main memory is usually cheaper than graphics memory and therefore larger, which lets you hold more data in RAM and reduces the need to read it from the comparatively slow hard drive.
The main CPU does nothing with the textures, so all this achieves is storing the data where it is not needed while forcing a constant copy from one memory to the other, every single frame. This is an unacceptable waste of resources. :yuck:

Wouldn't it make more sense to add slow but large memory to the graphics card as a data cache instead of abusing the main memory?
Imagine a graphics card with fast graphics RAM for rendering the current frame (for example 2 GB of high-clock-speed RAM) and a second, large data memory (for example 8 GB of slower, cheaper and maybe upgradeable RAM) that holds all the graphics data. This would make copying graphics data from main memory completely obsolete and would put the data where it is needed anyway: inside the graphics subsystem!

Yes, this would make graphics cards more expensive, but it would make life a lot easier for software developers (including driver programmers) and make the hardware perform a lot better.
One step closer to the "direct to metal" programming we need on PCs to keep up with the more successful consoles! :biggrin:

The entire computer industry is dedicated to moving everything onto the CPU, and here you are suggesting we move more of it onto the GPU. :blink:

System RAM and long-term storage are two of the components least likely to migrate onto the CPU in the near future, except in the most portable systems. Hence, the idea is to design circuitry that takes advantage of system RAM whenever possible. Rage's texture streaming is focused on maximizing the use of whatever components you have, so if you have a wimpy GPU it uses the CPU more, and vice versa. If you have both a wimpy CPU and GPU, or an APU, it can still use system RAM to speed up the process. Again, the idea is to create circuitry and programs that can be used even on cheap portables, not just the most expensive gaming rigs or consoles.

As I posted elsewhere on this forum before: I consider rasterization to be at the end of its life cycle. It takes a tremendous amount of math power to get rid of the artifacts that come with it. We spend more clock cycles correcting those artifacts than on rasterizing the pixels themselves.
I see Megatexture as a technology that pushes the boundaries of RASTERIZATION a little bit further, but I can't see how it would help in rendering voxels, as voxels shouldn't need textures. If someone here has experience with voxels (I don't), please correct me.

As for ray tracing: it enhances visual quality, but at a tremendous loss in rendering speed. In fact, the cost-benefit analysis falls short, since "casual Joe" wouldn't really see the difference in quality but would immediately notice the lower frame rate or loss of detail.
In my opinion, it is too early to talk about ray tracing as an alternative to rasterization.

I'm curious what the next generation of gaming consoles will deliver. The selling point of bringing high-def graphics to the home consumer, as with this generation, won't be there with the next, and stereo 3D doesn't look like it's going to be of much importance.
If they still use rasterization, the results won't be that much better.
My personal preference would be to give voxels (in addition to rasterization) a chance.

First off, it's not rasterization versus ray tracing or whatever. It's about using every possible tool in the toolkit to produce the best product you can, and the planned id Tech 6 engine will combine ray-cast geometry with rasterization.

Second, the "megatexture" circuitry on AMD's new GPU consists of simplified CPU-like processors. It's a parallel processing computer that can be used for ray casting as well. The point is, again, to figure out the best way to stream all the data to the GPU and to maximize the available bandwidth on any system. I'm sure the technical details get more complicated than that, but that's the basic gist of what they are doing.


Yep. But in such a big multi-billion-dollar business it's hard to take a risk and deliver a technology that might not survive the laws of the market.
Customers usually wait until a technology is widespread, which makes radical changes in graphics technology a tough sell.
I guess this is why Carmack built Megatexture into his engine: it enhances existing technology but doesn't turn everything upside down.

When computer graphics became really big business, it stopped being innovative and fast-moving.
In the '80s and '90s we had sprites, voxels, polygons (with and without textures) and sometimes even mathematical shapes like cylinders, circles and spheres (the movie Tron!) being rendered in graphics systems.
Today it's just textured polygons, because the industry needs a reliable, easy-to-handle technology... :blush:

There is no doubt whatsoever that people want this technology and they want it portable. Some 80% of tablet owners already use their devices for gaming, and India has just produced the first $50 tablet PC. Within ten years every snot-nosed kid will be dragging around a cheap Walmart tablet or phone capable of playing games like Crysis and Rage; a voice-activated one capable of surfing the web, reading books, helping with their homework, or finding their mother.
User avatar
luke trodden
 
Posts: 3445
Joined: Sun Jun 24, 2007 12:48 am

Post » Sun May 13, 2012 5:52 am

The entire computer industry is dedicated to moving everything onto the CPU, and here you are suggesting we move more of it onto the GPU. :blink:
Is that so? Then the entire computer industry is looking for cost reduction and not performance optimization. :biggrin:

System RAM and long-term storage are two of the components least likely to migrate onto the CPU in the near future, except in the most portable systems.
I'm not talking about system RAM, I'm talking about dedicated RAM for graphics data, no more, no less. With modern multicore CPUs of six cores or more, system RAM is already a bottleneck, so why add extra data copying there when it isn't necessary? It makes sense to keep all graphics-related data in one place instead of constantly swapping it in and out.
If the bucket can't hold enough water to put out the fire, get a bigger bucket instead of constantly running between the fire and the well. :tongue:
This applies to all hardware, mobile or not. But with mobile hardware cost is also a factor, so it is highly likely that mobile systems (and consoles) will take a cheaper approach to hardware design, like a unified memory architecture and so on.

Hence, the idea is to design circuitry that takes advantage of system RAM whenever possible.
Again: Does this make sense from a performance viewpoint? Nope...

Rage's texture streaming is focused on maximizing the use of whatever components you have, so if you have a wimpy GPU it uses the CPU more, and vice versa. If you have both a wimpy CPU and GPU, or an APU, it can still use system RAM to speed up the process. Again, the idea is to create circuitry and programs that can be used even on cheap portables, not just the most expensive gaming rigs or consoles.
Which doesn't really work on about 50% of the PC systems out there, assuming roughly half of all installed gaming PCs use ATI/AMD graphics cards.
There are several posts on this forum where people report 100% main processor load when running Rage and getting 1 frame per second because the engine isn't making use of the installed graphics hardware.

First off, it's not rasterization versus ray tracing or whatever. It's about using every possible tool in the toolkit to produce the best product you can, and the planned id Tech 6 engine will combine ray-cast geometry with rasterization.
So you're talking about adding elements of ray tracing to rasterization, not replacing rasterization with ray tracing, right? That would make more sense. We already had engines in the pre-accelerated '90s that successfully combined sprites, voxels and planes into one working system.
But today, in an environment where graphics have to be accelerated in one way or another, a decision has to be made.
You can't just move non-accelerated graphics features from the GPU to the main CPU, because that will not deliver sufficient performance compared to the highly optimized, polygon-pumping GPUs.
In the early days of DirectX, Microsoft promised to emulate in software any graphics feature the graphics card did not support.
That still doesn't work to this day. It's either software emulation OR hardware support. Missing functionality in a GPU is usually punished with omitted graphical effects (think of a shader version mismatch between software and hardware) or the risk of a game not running at all.


Second, the "megatexture" circuitry on AMD's new GPU consists of simplified CPU-like processors. It's a parallel processing computer that can be used for ray casting as well. The point is, again, to figure out the best way to stream all the data to the GPU and to maximize the available bandwidth on any system. I'm sure the technical details get more complicated than that, but that's the basic gist of what they are doing.
The slowest component in a system is always the bottleneck. In the case of memory-hungry Megatexture, that is RAM size and hard drive access speed.
The easiest way to get rid of these problems is to use large amounts of dedicated RAM. RAM is quite cheap today, so why bother with complicated systems that manage caching from point A to point B? The more complicated a system becomes, the more prone it is to failure...
By the way: do you have insider information?


There is no doubt whatsoever that people want this technology and they want it portable. Some 80% of tablet owners already use their devices for gaming, and India has just produced the first $50 tablet PC. Within ten years every snot-nosed kid will be dragging around a cheap Walmart tablet or phone capable of playing games like Crysis and Rage; a voice-activated one capable of surfing the web, reading books, helping with their homework, or finding their mother.
I don't have any doubts about that. But as I said: my view is from the perspective of performance optimization, not cost reduction.
User avatar
Manny(BAKE)
 
Posts: 3407
Joined: Thu Oct 25, 2007 9:14 am

Post » Sat May 12, 2012 10:15 pm

Is that so? Then the entire computer industry is looking for cost reduction and not performance optimization. :biggrin:

When it comes to home computing the cost/performance ratio is paramount.

I'm not talking about system RAM, I'm talking about dedicated RAM for graphics data, no more, no less. With modern multicore CPUs of six cores or more, system RAM is already a bottleneck, so why add extra data copying there when it isn't necessary? It makes sense to keep all graphics-related data in one place instead of constantly swapping it in and out.
If the bucket can't hold enough water to put out the fire, get a bigger bucket instead of constantly running between the fire and the well. :tongue:
This applies to all hardware, mobile or not. But with mobile hardware cost is also a factor, so it is highly likely that mobile systems (and consoles) will take a cheaper approach to hardware design, like a unified memory architecture and so on.

This argument is silly. Manufacturers already squeeze as much VRAM onto a GPU as they can, and you can go out and buy one with 6 GB of VRAM if you can afford $5,000 for a video card. Again, it's the cost/performance ratio that is paramount for home computing, and being able to use system RAM as a resource is just significantly more cost-effective all around.


Again: Does this make sense from a performance viewpoint? Nope...

More silliness. If you want more VRAM and performance then go ahead and spend $5,000 on a GPU.


Which doesn't really work on about 50% of the PC systems out there, assuming roughly half of all installed gaming PCs use ATI/AMD graphics cards.
There are several posts on this forum where people report 100% main processor load when running Rage and getting 1 frame per second because the engine isn't making use of the installed graphics hardware.
[deleted]

So you're talking about adding elements of ray tracing to rasterization, not replacing rasterization with ray tracing, right? That would make more sense. We already had engines in the pre-accelerated '90s that successfully combined sprites, voxels and planes into one working system.
But today, in an environment where graphics have to be accelerated in one way or another, a decision has to be made.
You can't just move non-accelerated graphics features from the GPU to the main CPU, because that will not deliver sufficient performance compared to the highly optimized, polygon-pumping GPUs.
In the early days of DirectX, Microsoft promised to emulate in software any graphics feature the graphics card did not support.
That still doesn't work to this day. It's either software emulation OR hardware support. Missing functionality in a GPU is usually punished with omitted graphical effects (think of a shader version mismatch between software and hardware) or the risk of a game not running at all.

Microsoft is right up there with politicians when it comes to making wild promises it can never keep and often never even intends to keep. Likewise, as I already said, you can play Rage on an iPhone today. The idea isn't just portability either, but to create APUs that can run in CrossFire/SLI with a discrete graphics card for cheap, upgradable desktops. Alternatively, if you want a high-powered rig, you can run physics and AI on the APU to improve gaming performance.


The slowest component in a system is always the bottleneck. In the case of memory-hungry Megatexture, that is RAM size and hard drive access speed.
The easiest way to get rid of these problems is to use large amounts of dedicated RAM. RAM is quite cheap today, so why bother with complicated systems that manage caching from point A to point B? The more complicated a system becomes, the more prone it is to failure...
By the way: do you have insider information?

LOL, no I don't have any insider information and if I did I wouldn't be blabbing it on the internet.

We can put two billion transistors on a chip these days, Rage is 22 GB in size, and you are arguing computers should be simpler. :blink:

I don't have any doubts about that. But as I said: my view is from the perspective of performance optimization, not cost reduction.

Then I suggest you design and build your own computer. The rest of the world is concerned with both performance and cost.
User avatar
Rachyroo
 
Posts: 3415
Joined: Tue Jun 20, 2006 11:23 pm

Post » Sun May 13, 2012 1:09 am

Let's keep on topic and avoid personal insults, please.
User avatar
Emmie Cate
 
Posts: 3372
Joined: Sun Mar 11, 2007 12:01 am

Post » Sat May 12, 2012 6:04 pm

This argument is silly. Manufacturers already squeeze as much VRAM onto a GPU as they can, and you can go out and buy one with 6 GB of VRAM if you can afford $5,000 for a video card. Again, it's the cost/performance ratio that is paramount for home computing, and being able to use system RAM as a resource is just significantly more cost-effective all around.

This argument is not silly; you didn't get my point. I was talking about the usual (affordable) amount of high-speed DDR video RAM for rendering the frame (as it is done now) plus additional (low-cost) RAM made of the slower, cheaper memory modules normally used as main system RAM. It doesn't even need to be fast or high-performing. The idea is to use it as a data cache that continuously feeds the video RAM, making any DMA access from the graphics hardware to main RAM obsolete.
So, if you're a gamer, you could actually save the money you would normally spend on main memory and spend it on video RAM instead.
All it would need is an additional memory controller on the GPU and some slots for the modules. That wouldn't drive the cost of graphics cards up to $5,000... :biggrin:
Users could decide for themselves whether to add more graphics memory, more system memory, or both.

Constantly swapping data from one memory to the other serves no positive purpose. It just creates performance problems!

I'm a graphics programmer myself and I do have some experience programming 3D graphics on PCs. What I'm talking about here is one of the completely unnecessary design problems that hinder the performance of PCs used as graphics workstations/gaming platforms.
PC programmers constantly complain about driver problems and driver overhead. We could reduce these problems by making the graphics hardware in PC systems work more autonomously. The biggest problem is the constant communication and data swapping between the main processor and the GPU.

Imagine a system where the main CPU just tells the graphics hardware what to draw, and how, via a simple display list, without any unnecessary copy operations that can be interrupted at any time by other main processor work. Just fire and forget!

This would prevent ALL synchronization problems between CPU and GPU and would make driver programming A LOT easier.
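
Just to sketch the idea (all the names here are made up; this is only an illustration, not any real driver interface):

#include <cstdint>
#include <vector>

// One small command referencing data that is already resident on the card.
struct DrawCommand {
    uint32_t meshId;        // handle to geometry living in graphics memory
    uint32_t materialId;    // handle to textures/shader state living in graphics memory
    float    transform[16]; // object-to-world matrix
};

// The "fire and forget" display list: the CPU records commands, hands the whole
// list over once, and then goes back to game logic.
struct DisplayList {
    std::vector<DrawCommand> commands;

    void record(const DrawCommand& cmd) { commands.push_back(cmd); }

    void submit() {
        // gpuSubmit(commands.data(), commands.size()); // hypothetical driver entry point
        commands.clear();
    }
};

No per-draw negotiation with the driver, no texture copies hidden inside the calls, just a list the card works through on its own.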



[deleted]
I don't know what you wrote here, but describing an existing problem shouldn't be a reason for anybody to get cocky.
My problem is that everybody expects programmers to do the workarounds for badly designed systems. Programming an engine that runs perfectly on every imaginable piece of hardware is an absolute nightmare, not a walk in the park. There are so many possible combinations of memory size, hardware performance and system layout that it is impossible to serve them all at decent quality at the same time.

@Booty Sweat: Thanks for keeping things right to the point. :clap:

Microsoft is right up there with politicians when it comes to making wild promises it can never keep and often never even intends to keep.
Microsoft's idea was a great one. But as I said, you'll have a hard time serving all possible system combinations by emulating missing features. It gets even worse when you program an entire piece of software to work on every imaginable target platform.

Likewise, as I already said, you can play Rage on an iPhone today. The idea isn't just portability either, but to create APUs that can run in CrossFire/SLI with a discrete graphics card for cheap, upgradable desktops. Alternatively, if you want a high-powered rig, you can run physics and AI on the APU to improve gaming performance.

SLI is the right keyword here when talking about costs:
What is more expensive? Adding more memory where it's needed or adding another fully equipped graphics card? :happy:
Again: if you could get rid of all the unnecessary bottlenecks in a system, you would have more performance left for the things you really want to do, instead of adding yet more chips.

We can put two billion transistors on a chip these days, Rage is 22 GB in size, and you are arguing computers should be simpler. :blink:
The problem is not the transistor density on computer chips, it's the connections and the communication between them!
Let the graphics hardware do all graphics operations, the sound hardware all sound operations and the main processor all the rest.
That's all I'm asking for.
User avatar
Tracey Duncan
 
Posts: 3299
Joined: Wed Apr 18, 2007 9:32 am

Post » Sun May 13, 2012 1:19 am

The problem is not the transistor density on computer chips, it's the connections and the communication between them!
Let the graphics hardware do all graphics operations, the sound hardware all sound operations and the main processor all the rest.
That's all I'm asking for.

If you do graphics then you already know OpenGL and PRT have been used in graphics work for many years, and enterprise graphics cards are huge and expensive. Since none of them that I know of use your idea, the implication is that it's not nearly as simple and cheap as you suggest.
User avatar
Taylor Thompson
 
Posts: 3350
Joined: Fri Nov 16, 2007 5:19 am

Post » Sat May 12, 2012 6:50 pm

I'm not talking about enterprise or "professional" graphics hardware. I'm just talking about moving RAM from system memory (where it is not needed for rendering) to graphics memory (where it is needed) on your standard mid-range graphics card.
Sound cards have worked this way for decades.
Maybe the idea is so simple that nobody ever thought about it.
Most users today don't need 4-8 GB of main memory in their systems except for gaming.
If you use your PC (I'm not talking about mobile hardware here) for the usual stuff like surfing the web, word processing or Excel, 2 GB of main memory is more than enough.
The widely used software that is really memory-hungry is games, so why not give games the memory where it is needed most?

As I said, you'd just need an additional memory controller and some slots for standard memory modules. That wouldn't be terribly expensive, but it would get rid of a lot of the problems related to computer graphics on PC systems.
User avatar
Add Me
 
Posts: 3486
Joined: Thu Jul 05, 2007 8:21 am

Post » Sat May 12, 2012 6:05 pm

My kid's toys can have memory added to them, but that doesn't mean adding slower RAM to a video card is nearly as cheap and effective as you have suggested. Again, if it were that cheap AND effective, manufacturers would have done it long ago. It's a competitive multi-billion-dollar industry that produces a huge variety of consumer products for every kind of PC imaginable, but not one that I know of uses your idea.
User avatar
Danger Mouse
 
Posts: 3393
Joined: Sat Oct 07, 2006 9:55 am

Post » Sun May 13, 2012 5:28 am

We have a common saying here in Germany which, translated into English, goes like this:
"Eat crap! Millions of flies can't be wrong."

In other words: Just because everybody does it in a certain way, it isn't necessarily the right (or best) way.

And if we did everything the same way it has always been done, we would have no progress at all.

The biggest problem the PC suffers from, in contrast to the consoles, is its unoptimized but admittedly more flexible hardware.
That's why a PC needs drivers and a console doesn't.
One of the biggest tasks (or rather, problems) for graphics card drivers is memory management. The industry could get rid of this problem by considering what I described before.
Which doesn't mean I'm so arrogant as to assume any of them should listen to me, or that I'm more intelligent than the people working in their development departments.
They surely have their reasons for doing things the way they do them.
I'm just talking from my experience programming 3D graphics on the PC.

Most PC users today complain about the fact that consoles deliver almost the same rendering quality as their PC, which on paper is at least 10 times faster. They blame the consoles, assuming that modern games are optimized for consoles and therefore lack proper PC optimization.
WRONG!
What we see today on PCs is what the PCs of today can deliver, no more, no less. Most of their theoretical speed advantage is eaten up by the driver/graphics library software layer, which is ridiculously slow compared to the direct programming possibilities on consoles.

If we want faster PC graphics, we need the PC to be more optimized for this task.

If you want to draw a texture on an Xbox 360, the procedure is:
1. Load the texture into memory (it's unified, so there is just one memory block: no copying, you just set a memory address pointer. The 360 is in fact a graphics card with a main CPU attached).
2. Tell the graphics processor to draw the texture via your graphics engine.
3. The texture shows up on screen.

If you want to draw a texture on a PC you do the following:
1. Load the texture into MAIN MEMORY.
2. Tell your graphics library (DirectX or OpenGL) to draw the texture via your graphics engine.
3. The graphics library talks to the graphics driver to draw the texture.
4. The graphics driver asks for the texture.
5. The graphics driver checks whether there is enough room in GRAPHICS RAM to hold the complete texture.
6. If yes: the texture is copied from MAIN MEMORY to GRAPHICS MEMORY. If no: the driver reports "out of memory" to the graphics library, and your engine has to handle it.
7. If the driver works properly for the installed card: the texture shows up on screen. If not: you get graphics errors or blue screens.

Now imagine running this (very much simplified) process from steps 2-7 (step 1 is done once before rendering starts) for hundreds of textures every single frame, at a frame rate of 60 frames per second.
The out-of-memory situation doesn't exist on consoles, because you always know how much memory you have; it is the same on every system worldwide.

If we stored all graphics data in the graphics card's memory, we could kill steps 4, 5 and 6 immediately.
Resulting in:
1. Load the texture into GRAPHICS MEMORY.
2. Tell your graphics library (DirectX or OpenGL) to draw the texture via your graphics engine.
3. The graphics library talks to the graphics driver to draw the texture.
4. (formerly 7) If the driver works properly for the installed card: the texture shows up on screen. If not: you get graphics errors or blue screens.

You see:
This is the better way to do it, because it keeps the data buses from being choked with unnecessary texture copying every single frame, which is by far the most bandwidth-heavy part of graphics operations (rough code sketch below).
And it gets rid of the out-of-memory situation, because memory is checked just once BEFORE the render loop starts, not every time the render function loops (60 times a second at full speed).
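
To put the two step lists into code form, this is roughly how the per-frame work would differ (plain OpenGL, heavily simplified, and the first function shows the worst case the 7-step list describes):

#include <GL/gl.h>

// Worst case: the texture data crosses the bus again and again because the
// driver has to copy it from main memory (steps 4-6) during the frame.
void presentFrameCopyingEveryFrame(GLuint tex, const void* pixels, int w, int h) {
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, w, h,
                    GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    // ... issue the draw calls for this frame ...
}

// The shorter list: the copy happened once at load time, so per frame we only
// bind the already-resident texture and draw.
void presentFrameResident(GLuint tex) {
    glBindTexture(GL_TEXTURE_2D, tex);
    // ... issue the draw calls for this frame ...
}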

Now add Megatexture to that concept and you get very low bandwidth requirements for the additional data memory on the graphics card, because you only have to copy the small tiles of the texture sections that actually have to be drawn on screen.
I don't really know how large the actual tiles are (I heard Carmack say this is configurable), but I assume you could run this system very effectively over very low-bandwidth (meaning cheap) data buses.
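
To put a rough number on it (my own assumptions, since I don't know id's real figures): if a tile were 128x128 texels at 4 bytes per texel, that's 64 KB uncompressed, and more like 8-16 KB with DXT compression. Even streaming a few hundred compressed tiles every single frame at 60 fps would only be on the order of a few hundred megabytes per second, a fraction of what full-texture copies between the two memory pools would need.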
User avatar
Andres Lechuga
 
Posts: 3406
Joined: Sun Aug 12, 2007 8:47 pm
