I don't like to do this but......

Post » Wed May 16, 2012 6:06 am

The cause of the problem comes from multiple sources. Often it's very much a case of Problem Exists Between Keyboard And Chair - people aggressively overclocking, trying to max out settings or force them in driver control panels, and then complaining righteously and indignantly when problems happen. "But it works with everything else!" is the common refrain. Reality is that they're running something outside of its designed tolerance levels, so it's going to break sometime - it's not "if", it's "when", and the most likely time it's going to break is when they try to use something that particularly stresses those tolerance levels. (That's something our engineer OP should already be well aware of.)

When I'm feeling in a nasty mood I describe the typical "power user" as "somebody who has just about enough knowledge for it to be a dangerous thing". It's harsh but sometimes it's true.

The driver situation is a messy one. Alex St John, a DirectX evangelist at Microsoft in the late 90s, had a saying that "the drivers are always broken", and I think he was right. This applies to AMD, NVIDIA, Intel, whoever you care to mention. Every 3D hardware manufacturer ships broken drivers, and they know it (look at the release notes PDF for any NV driver and you'll get a good idea of one example of this). In the real world, when you ship software (be it a driver, a game, a business app, whatever) you will always ship with bugs. Every non-trivial program has bugs. Sometimes they're bugs you don't know about because some esoteric configuration or unlikely combination of components wasn't tested. Sometimes they're bugs you do know about but they're not currently causing any problems. Sometimes bugs go into the "can't fix" or "won't fix" bracket because fixing them will cause unwanted side-effects elsewhere. You must give the user a workaround instead.

In the case of OpenGL, ATI in the old days had quite a reputation for especially poor drivers, and AMD have inherited that. OpenGL itself doesn't help, with its huge monolithic driver model and an ocean liner full of legacy cruft that it must continue to carry; stuff that mapped well enough to SGI hardware in the 1990s but is so far off the mark with modern hardware that it's not even funny. D3D imposes a layer of sanity between the program and the driver in the form of the Runtime, but that has its own compromises. The old "yee-haw, ride 'em cowboy" days of having to code explicitly to specific hardware died for a reason. There is no perfect solution, aside from an unwanted scenario where there is one hardware manufacturer that has 95% coverage. (That, by the way, is exactly why consoles make things easier for developers - you're guaranteed the same hardware for every user, and you can tune your program to the strengths of that hardware without having to worry about breaking on another manufacturer's hardware.)

This goes beyond 3D hardware. I recently solved a problem where all games were hitching and jerking every few seconds by uninstalling a Realtek ethernet controller driver. I've seen Broadcom NIC drivers bring down a server cluster.

I don't know what the answer to the situation is, but all I can do is re-emphasise the questions.
Katie Pollard
 
Posts: 3460
Joined: Thu Nov 09, 2006 11:23 pm

Post » Wed May 16, 2012 4:42 am

Oh, how I miss the good old days. Shops warning people never to install video games on their PC, the frequent BSODs, registry errors, the need to close down all background applications before running a game, and to periodically re-install everything, including the operating system. What were manufacturers thinking when they started "driving" down this road? Were they trying to appeal to the lowest common denominator, the insatiable human demand for instant gratification? The lazy bums. In my day we walked uphill through ten feet of snow both to and from school.

Think back to the good ole' days of DOS and VESA 2.0 graphics.
Games ran directly on the hardware, with their DOS extenders going beyond DOS's limits.
If your graphics card couldn't handle "hi-res" graphics beyond 320x200, there was just ONE SINGLE driver that made it work in almost all cases: UniVBE.

Life (and the software we used) was simpler then... :tongue:

Too much functionality leads to too much abstraction, which leads to too many possible failures.

I'm still trying to figure out why I should watch movies on my cell phone. I just want to make phone calls with these damn things!
It's a similar situation with PCs. There's nothing you can't do with a PC nowadays, except make breakfast.

Maybe I'm just getting old. :laugh:
how solid
 
Posts: 3434
Joined: Mon Apr 23, 2007 5:27 am

Post » Tue May 15, 2012 11:47 pm

The problem is that the software is falling further behind the hardware. People buy the latest and greatest hardware without realizing the software they use on it is ten years out of date. With the move to multicore processing the situation is just getting worse; however, AMD is already working towards unifying CPU and GPU memory and designing chips that to some extent thread themselves. That's the promise of heterogeneous architecture: simplifying the job of programmers so they can focus more on just writing good programs and less on making them run on every configuration of hardware imaginable. Eventually the systems will become more standardized and even hardwired, but not until the hardware has been around for a while and the software has had a chance to catch up.
Project
 
Posts: 3490
Joined: Fri May 04, 2007 7:58 am

Post » Wed May 16, 2012 10:27 am

The cause of the problem comes from multiple sources. Often it's very much a case of Problem Exists Between Keyboard And Chair - people aggressively overclocking, trying to max out settings or force them in driver control panels, and then complaining righteously and indignantly when problems happen. "But it works with everything else!" is the common refrain. Reality is that they're running something outside of its designed tolerance levels, so it's going to break sometime - it's not "if", it's "when", and the most likely time it's going to break is when they try to use something that particularly stresses those tolerance levels. (That's something our engineer OP should already be well aware of.)

Good point I almost forgot: games are the type of software that stresses all components of a PC pretty much to the max.
Unfortunately, it's the gamers who usually overclock their hardware beyond acceptable limits for benchmark boasting.
Bad combination...

When I'm feeling in a nasty mood I describe the typical "power user" as "somebody who has just about enough knowledge for it to be a dangerous thing". It's harsh but sometimes it's true.
+1 :biggrin:
It's perfectly clear that PCs are in fact too "tuneable" for the casual Joe. A mass-market machine that needs some decent knowledge to be handled correctly is prone to failure. As far as I know, Apple has far fewer problems with hardware/software failures, but considerably less potential for tuning.

The driver situation is a messy one. Alex St John, a DirectX evangelist at Microsoft in the late 90s, had a saying that "the drivers are always broken", and I think he was right. This applies to AMD, NVIDIA, Intel, whoever you care to mention. Every 3D hardware manufacturer ships broken drivers, and they know it (look at the release notes PDF for any NV driver and you'll get a good idea of one example of this).

I'm trying to figure out whether this is a conceptual problem that has been there right from the beginning of abstracting hardware from software and needs a complete reboot, or whether it can be fixed by forcing standards onto the industry at the expense of performance, flexibility and competition between manufacturers.

Would you agree that the fact that nowadays "all drivers are broken" is not really acceptable?
Imagine refueling your car at the fuel station with fuel that doesn't guarantee you'll get where you want to go.
No car driver (pardon the pun :biggrin: ) would accept this.

Looking at PCs, this seems to be an accepted limitation.
In fact, I can't imagine any other product where such a high failure rate and lack of QA would be accepted by the consumer.


In the real world, when you ship software (be it a driver, a game, a business app, whatever) you will always ship with bugs. Every non-trivial program has bugs. Sometimes they're bugs you don't know about because some esoteric configuration or unlikely combination of components wasn't tested. Sometimes they're bugs you do know about but they're not currently causing any problems. Sometimes bugs go into the "can't fix" or "won't fix" bracket because fixing them will cause unwanted side-effects elsewhere. You must give the user a workaround instead.

Which reminds me of Carmack's keynote, where he stated that Rage was heavily tested for bugs and came out as a pretty solid product.
As we know now, most problems related to Rage on PC are driver-based, which leads me to the main reason for asking for your opinion:

As a programmer, people expect you to supply working software.
Software that has to work on all systems, on all occasions, in any given situation.
If the software fails, it's the programmer's fault. I've had this happen many times before.
It's usually the people who don't know anything about modern software architecture who are the first to criticize us.
That the reasons for software not working as expected can lie outside the programmer's responsibility is not understood.

As idiotic as it may sound to the consumer who just wants a working product for their buck:
I consider the situation with Rage and the ATI/OpenGL dilemma, and Carmack's attitude towards it, absolutely refreshing.

If even a big company led by one of the most respected programmers in the business can't be sure of releasing a working product, there must be something wrong with the process.
Imagine Ferrari selling a high-powered car and Pirelli supplying tires that blow out the first time you go beyond 65 miles per hour.

In the case of OpenGL, ATI in the old days had quite a reputation for especially poor drivers, and AMD have inherited that. OpenGL itself doesn't help, with its huge monolithic driver model and an ocean liner full of legacy cruft that it must continue to carry; stuff that mapped well enough to SGI hardware in the 1990s but is so far off the mark with modern hardware that it's not even funny. D3D imposes a layer of sanity between the program and the driver in the form of the Runtime, but that has its own compromises. The old "yee-haw, ride 'em cowboy" days of having to code explicitly to specific hardware died for a reason. There is no perfect solution, aside from an unwanted scenario where there is one hardware manufacturer that has 95% coverage. (That, by the way, is exactly why consoles make things easier for developers - you're guaranteed the same hardware for every user, and you can tune your program to the strengths of that hardware without having to worry about breaking on another manufacturer's hardware.)

Too many cooks spoil the broth. Especially when nobody takes the lead to bring all of them together.
I considered Microsoft's idea of certifying drivers a good one, although I can understand GPU manufacturers' reservations about it, as they fear loss of intellectual property.
Nevertheless, I've had certified drivers installed on my system that failed completely. :biggrin:

This situation could end up being the last nail in the coffin for the PC as a gaming machine, when nobody really seems to understand what's going wrong.
Most "power users" :happy: still consider gaming consoles the main reason PC games don't work or look right on their "superduper ultra megahertz machine", when in reality it's the crappy situation with unreliable drivers, leading to unacceptable performance overhead that just generates heat.

There are just two GPU manufacturers and two accepted graphics libraries left, and this works worse than it did 15 years ago, when we had 5-6 GPU manufacturers with their respective libraries, plus software rendering without acceleration.

Maybe the business guys have too much influence on this.
As long as "power users" :happy: will buy overclocked, energy-devouring GPUs that deliver 10-frames-per-second improvements at the expense of the product's reliability, the business guys will sell them.
Again: this might be the final nail in the coffin for the PC as a gaming machine.

Why should any thinking person develop a game for the PC, with all the related technical problems, not to mention piracy, when consoles do not suffer from any of them?

This goes beyond 3D hardware. I recently solved a problem where all games were hitching and jerking every few seconds by uninstalling a Realtek ethernet controller driver. I've seen Broadcom NIC drivers bring down a server cluster.

Yep, I had similar problems some time ago with software running in the background.
Lots of hardware and software running at the same time has to share resources somehow. A problem consoles (or old DOS PCs :laugh: ) don't have either.
And yet many PC gamers still consider their system superior.

I don't know what the answer to the situation is, but all I can do is re-emphasise the questions.

I'd love to hear more opinions on this matter. Especially from people involved in the GPU business.
Eddie Howe
 
Posts: 3448
Joined: Sat Jun 30, 2007 6:06 am

Post » Wed May 16, 2012 8:02 am

The problem is that the software is falling further behind the hardware. People buy the latest and greatest hardware without realizing the software they use on it is ten years out of date. With the move to multicore processing the situation is just getting worse; however, AMD is already working towards unifying CPU and GPU memory and designing chips that to some extent thread themselves. That's the promise of heterogeneous architecture: simplifying the job of programmers so they can focus more on just writing good programs and less on making them run on every configuration of hardware imaginable. Eventually the systems will become more standardized and even hardwired, but not until the hardware has been around for a while and the software has had a chance to catch up.

Integrating the GPU into the CPU and putting it all together in one piece might be a great idea.
As long as they don't change the architecture of their chips with every generation, like they do now with their discrete GPUs, and instead submit to a standard like CPUs have had for years with x86 and x64, which obviously works pretty well.
But such a standard would come at the expense of performance, which GPU manufacturers would obviously dislike, as it's their best selling point.
Cheryl Rice
 
Posts: 3412
Joined: Sat Aug 11, 2007 7:44 am

Post » Wed May 16, 2012 10:23 am

There are just two GPU manufacturers and two accepted graphics libraries left, and this works worse than it did 15 years ago, when we had 5-6 GPU manufacturers with their respective libraries, plus software rendering without acceleration.

I think this hits on a key point. We have two main vendors and they're both aggressively competing for the bottom line. Something has to give way in that scenario, and unfortunately what's given way is quality control. Getting a few extra frames in benchmarks, and a checkbox on the feature list that your rival doesn't have, seems to overrule everything else.

I remember back when NVIDIA first came on the market, at least really seriously (TNT2 times, they were just coming out of their own underdog status back then). We had ATI, 3DFX, Intel, S3, Matrox and a host of long-forgotten companies. The TNT2 scored on account of a great balance of performance and quality, especially when compared to the main player (3DFX) who were still stuck on 16-bit colour and couldn't run in windowed modes.

The point is, though, that if a vendor sucked you had plenty of other options. You could go to NVIDIA for the balance, ATI for a grab-bag of functionality, 3DFX for raw power, Matrox for image quality, etc. Nowadays that's gone - we're left with two, and the only thing really differentiating them is blind tribal loyalty.

Regarding quality, it's not a nice decision, but Microsoft and D3D have shown the way forward. Choose your standard carefully, rigidly enforce it, shove it down the vendors' throats, remove all scope for them to get up to their own funny business, and if they try to, close the loophole in the next version. OpenGL suffers badly here because it chooses the wild-west option. Yes, it's great that it's extensible, and yes, it's great that it's outside of any single central enforcement, but at the end of the day, when the customers are the ones who suffer, then something must be wrong.
Stat Wrecker
 
Posts: 3511
Joined: Mon Sep 24, 2007 6:14 am

Post » Wed May 16, 2012 12:24 pm

Integrating the GPU into the CPU and putting it all together in one piece might be a great idea.
As long as they don't change the architecture of their chips with every generation, like they do now with their discrete GPUs, and instead submit to a standard like CPUs have had for years with x86 and x64, which obviously works pretty well.
But such a standard would come at the expense of performance, which GPU manufacturers would obviously dislike, as it's their best selling point.

The problem is the entire computer rather than any individual chip. You might compare it to the evolution of the automobile. In a few short decades we went from a 35mph Model T Ford to the average car being able to push 100mph and tow a huge trailer, but they were expensive, unsafe, and guzzled gas. It required decades more to figure out how to make them significantly cheaper, safer, and more efficient, and the entire industry had to redesign the car from the ground up. We've pushed silicon close to the limits of how fast it can work and crammed two billion transistors onto a chip the size of your fingernail, but for years now most programs just haven't really run much faster, because computers were never designed from the ground up to be efficient. It's gotten so bad that even the DoD has become involved and begun warning people that if they don't do something within ten years, even supercomputers will start to hit a wall.

Essentially they're shooting for an efficient supercomputer on a chip. Not just an SoC, but a supercomputer hardwired to thread itself to some extent and figure out the fastest way to run any program you throw at it while using as little energy as possible. It will have at least four different types of processors and use whatever works best for any given job. Eventually possibly 8 CPU cores, 300 simplified GPU processors, and who knows what else. There's already one phone SoC coming on the market with an additional simplified core just for saving energy in standby mode. You should be able to use the same basic chip design, just like you can a CPU today, for anything from a phone to a desktop computer or a supercomputer with thousands of chips.
Louise Andrew
 
Posts: 3333
Joined: Mon Nov 27, 2006 8:01 am

Post » Wed May 16, 2012 2:39 pm

I think this hits on a key point. We have two main vendors and they're both aggressively competing for the bottom line. Something has to give way in that scenario, and unfortunately what's given way is quality control. Getting a few extra frames in benchmarks, and a checkbox on the feature list that your rival doesn't have, seems to overrule everything else.

Which really is an odd situation. Usually heavy competition should result in better and more affordable products.
In this case, the hardware/drivers get more and more unreliable, and the theoretical gain in installed performance gets more and more out of reach.
We have teraflops of power available in single chips which can't be fully utilized, due to compatibility problems on the driver side and the ridiculous number of different GPUs with different capabilities installed in home PCs. In the end, the software has to run on the majority of systems.

I remember back when NVIDIA first came on the market, at least really seriously (TNT2 times, they were just coming out of their own underdog status back then). We had ATI, 3DFX, Intel, S3, Matrox and a host of long-forgotten companies. The TNT2 scored on account of a great balance of performance and quality, especially when compared to the main player (3DFX) who were still stuck on 16-bit colour and couldn't run in windowed modes.
The point is, though, that if a vendor sucked you had plenty of other options. You could go to NVIDIA for the balance, ATI for a grab-bag of functionality, 3DFX for raw power, Matrox for image quality, etc. Nowadays that's gone - we're left with two, and the only thing really differentiating them is blind tribal loyalty.

Perfectly right. Maybe the competition is already dead and we effectively have a monopolistic situation, with two vendors left that share the market.
Interestingly enough, this kind of situation works pretty well with CPUs: Intel and AMD main processors are of high build quality and usually don't suffer from compatibility or reliability problems.


When I write a program in C/C++ without any hardware-accelerated graphics, the compilers usually do a very good job of optimizing the code for the different CPU types when I set the right switches.
And even if I don't use CPU optimizations, the software will run at lower performance. But it will run.
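(For illustration, a rough sketch of that idea in C++ - GCC/Clang builtins assumed, the function names are made up, not from any real engine: detect the CPU feature at runtime and fall back to plain portable code, so the program gets slower on old hardware but never refuses to run.)

#include <cstddef>

// Portable fallback: slower, but runs on any x86 CPU.
void add_arrays_scalar(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}

#if defined(__GNUC__)
// Same loop, but this function is built with AVX enabled so the compiler may vectorize it.
__attribute__((target("avx")))
void add_arrays_avx(const float* a, const float* b, float* out, std::size_t n) {
    for (std::size_t i = 0; i < n; ++i)
        out[i] = a[i] + b[i];
}
#endif

// Dispatch at runtime: use the optimized path only if the CPU actually supports it.
void add_arrays(const float* a, const float* b, float* out, std::size_t n) {
#if defined(__GNUC__)
    if (__builtin_cpu_supports("avx")) {
        add_arrays_avx(a, b, out, n);
        return;
    }
#endif
    add_arrays_scalar(a, b, out, n);
}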

The situation with GPUs, drivers and constantly changing libraries and standards, on the other hand, is really annoying and can lead to extreme amounts of work just to get the software to run AT ALL.

Maybe this could be a model for the future of PC graphics: as Wuliheron already mentioned, there's some effort towards integrating GPU functionality into the main processor.
If the companies involved could agree on implementing compatibility standards like we've had with CPUs for decades, life would be a lot easier.

Think of a more generalized graphics library consisting of the usual graphics-related functions, already implemented and standardized on the main CPU and handled inside the standard code libraries.
We had this situation years ago when there was no T&L hardware acceleration and all the transformation was done by the main CPU.
This was slower but definitely a lot easier to handle.
When hardware acceleration moved from fixed-function pipelines to shaders, the s**t hit the fan, in my opinion.
I can't tell how it works in OpenGL (not that much experience with it), but in DirectX before shaders it was just a matter of calling a function with the right parameters to switch between software and hardware mode.
And you could be quite sure it was a yes-or-no situation.

Today, the first thing I have to do in my code is retrieve all the graphics-hardware-related data so I can execute the right code path for the hardware at hand.
Stupidly enough, this process is as unreliable as it can possibly get and sometimes doesn't work at all, leading to crashes and BSODs.

In fact, the process of retrieving info about the hardware in code is sometimes more complicated and error-prone than designing the transformation code itself, which is nothing but pure math and should work on any given hardware.
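(Just to illustrate what that capability dance looks like in D3D9 - a rough sketch, error handling trimmed, the thresholds picked arbitrarily, not my production code:)

#include <d3d9.h>

// Query the HAL caps once, then decide on vertex processing and whether a shader
// path is worth attempting at all.
bool ChooseRenderPath(IDirect3D9* d3d, DWORD& behaviorFlags, bool& useShaders)
{
    D3DCAPS9 caps;
    if (FAILED(d3d->GetDeviceCaps(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, &caps)))
        return false;   // can't even query the HAL device

    // Hardware vs. software vertex processing - the old "one function call" switch.
    behaviorFlags = (caps.DevCaps & D3DDEVCAPS_HWTRANSFORMANDLIGHT)
                  ? D3DCREATE_HARDWARE_VERTEXPROCESSING
                  : D3DCREATE_SOFTWARE_VERTEXPROCESSING;

    // Use the shader code path only if the reported versions are high enough;
    // otherwise fall back to fixed function elsewhere in the engine.
    useShaders = caps.VertexShaderVersion >= D3DVS_VERSION(2, 0)
              && caps.PixelShaderVersion  >= D3DPS_VERSION(2, 0);
    return true;
}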

And this is one of the main problems today:
It takes more time to get the software running in all situations than you would need to optimize your algorithms.
Like pouring water into a bucket full of holes...

This is where the industry should rethink its methods.
In the end we have users buying expensive hardware whose power can't be utilized, due to systemic problems with the crappy interfaces that prevent it from being used.
Like Carmack already said: we have 10 times the power in PCs compared to consoles, but so much overhead that most of it is eaten away.

Regarding quality, it's not a nice decision, but Microsoft and D3D have shown the way forward. Choose your standard carefully, rigidly enforce it, shove it down the vendors' throats, remove all scope for them to get up to their own funny business, and if they try to, close the loophole in the next version. OpenGL suffers badly here because it chooses the wild-west option. Yes, it's great that it's extensible, and yes, it's great that it's outside of any single central enforcement, but at the end of the day, when the customers are the ones who suffer, then something must be wrong.

As far as I know, OpenGL's extensibility and open nature are its biggest disadvantage, as nobody really knows anymore what can be done and what can't.
I went for DirectX 5 instead of OpenGL years ago for exactly that reason. I wanted a reliable standard to program my software against, as I thought it would be easier to make it run on all hardware.
But today they change so many things from one version to the next that this advantage is completely eaten away.

I'm beginning to understand why game companies today need so many employees. It's almost impossible to develop graphics solutions of any kind without a ridiculous number of people doing testing and bug squashing.

Writing a competitive graphics engine can be done by a single programmer in acceptable time, even by today's standards.
But all the problems involved in making it run on all possible systems eat up far too many resources.
Solina971
 
Posts: 3421
Joined: Thu Mar 29, 2007 6:40 am

Post » Tue May 15, 2012 10:53 pm

@Wuliheron
Since you're primarily interested in the hardware side, I can assure you that all these optimizations and efforts in the hardware business won't be of any use if the software doesn't hold up.
I can write crappy code on the fastest imaginable hardware that makes it run like you're using a Commodore C64.
Same with energy efficiency: if I constantly push the hardware with unnecessary code execution, the chips will run hot and the electricity bill will rise.

This is what we see today with PCs, and especially with graphics hardware/driver problems. Lots of the theoretical power is eaten away by bad interfaces and bad software connecting to the hardware. There are so many unnecessary function calls and memory operations that it really hurts performance.
Performance that is thrown away due to bad optimization in the interfaces and standards the programmer has to work with.

Looks like the hardware is evolving way faster than the related software interfaces...
Enie van Bied
 
Posts: 3350
Joined: Sun Apr 22, 2007 11:47 pm

Post » Wed May 16, 2012 3:13 am

@Wuliheron
Since you're primarily interested in the hardware side, I can assure you that all these optimizations and efforts in the hardware business won't be of any use if the software doesn't hold up.
I can write crappy code on the fastest imaginable hardware that makes it run like you're using a Commodore C64.
Same with energy efficiency: if I constantly push the hardware with unnecessary code execution, the chips will run hot and the electricity bill will rise.

This is what we see today with PCs, and especially with graphics hardware/driver problems. Lots of the theoretical power is eaten away by bad interfaces and bad software connecting to the hardware. There are so many unnecessary function calls and memory operations that it really hurts performance.
Performance that is thrown away due to bad optimization in the interfaces and standards the programmer has to work with.

Looks like the hardware is evolving way faster than the related software interfaces...

Think of it as the equivalent of the oil crisis: the "silicon crisis". If manufacturers could find a cheap way to produce 100GHz graphene chips tomorrow, they would forget all about efficiency and happily go back to figuring out the latest bells and whistles they could add to their products using the new technology. They would continue to encourage programmers to pump out the most outrageously inefficient code and create as many layers of abstraction as humanly possible. The problem is we don't have a cheap way to produce graphene chips, and the entire industry has spent trillions of dollars building factories to make silicon chips. The only way for them to remain competitive, short of pulling a rabbit out of their hats, is to make their products more efficient from the ground up and then eventually force programmers to write more efficient code, just as the automobile industry had to redesign cars for efficiency and Congress had to start taxing gas and reduce the speed limit to 55mph.
Jessie
 
Posts: 3343
Joined: Sat Oct 14, 2006 2:54 am

Post » Wed May 16, 2012 8:54 am

Driver quality was the main reason why I switched. I have two current projects - one D3D11 and the other OpenGL. The D3D11 work is a pure pleasure, whereas OpenGL is just a constant battle. Extensions don't help; I'm trying to keep a single backend rendering path but I'm constantly being hit by having to check for extensions and diverge in order to get performance. Sometimes I just emulate a newer extension in software - many of the glDrawElements variants can be emulated by the original function, so if the entry point doesn't exist I point it at my emulated version instead. Nasty stuff. But that aside, driver quality is still significantly lower.
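(For the curious, that fallback looks roughly like this - a simplified sketch, not the actual project code, with wglGetProcAddress standing in for whatever loader you use:)

#include <windows.h>   // wglGetProcAddress; substitute your platform's GL loader
#include <GL/gl.h>

typedef void (APIENTRY *PFNDRAWRANGEELEMENTS)(GLenum mode, GLuint start, GLuint end,
                                              GLsizei count, GLenum type, const void* indices);

// start/end are only an optimization hint, so dropping them is legal - just slower.
static void APIENTRY EmulatedDrawRangeElements(GLenum mode, GLuint /*start*/, GLuint /*end*/,
                                               GLsizei count, GLenum type, const void* indices)
{
    glDrawElements(mode, count, type, indices);
}

PFNDRAWRANGEELEMENTS myDrawRangeElements = 0;

void InitDrawRangeElements()
{
    myDrawRangeElements = (PFNDRAWRANGEELEMENTS)wglGetProcAddress("glDrawRangeElements");
    if (!myDrawRangeElements)                  // GL 1.1-only driver: point at the emulation
        myDrawRangeElements = EmulatedDrawRangeElements;
}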

I've done quite a bit of work recently with OpenGL shaders using the old assembly language extensions, mainly because I could never get any kind of reliability with GLSL (I wasn't able to test locally on much variant hardware at the time, our team is quite geographically dispersed) and I needed to run on NV, AMD and Intel (one of the team leaders is very strong about "it must run on older hardware" too - ugh). They work and they're not a total madhouse like all the recent GLSL variants.
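(As a taste of what those assembly extensions look like - a minimal, purely illustrative fragment program via ARB_fragment_program, GLEW assumed for the entry points, error checking trimmed:)

#include <GL/glew.h>
#include <cstring>

// Modulate the texture sample with the interpolated vertex colour.
static const char* kFragProg =
    "!!ARBfp1.0\n"
    "TEMP texel;\n"
    "TEX texel, fragment.texcoord[0], texture[0], 2D;\n"
    "MUL result.color, texel, fragment.color;\n"
    "END\n";

GLuint CreateAsmFragmentProgram()
{
    GLuint prog = 0;
    glGenProgramsARB(1, &prog);
    glBindProgramARB(GL_FRAGMENT_PROGRAM_ARB, prog);
    glProgramStringARB(GL_FRAGMENT_PROGRAM_ARB, GL_PROGRAM_FORMAT_ASCII_ARB,
                       (GLsizei)std::strlen(kFragProg), kFragProg);
    // Check glGetString(GL_PROGRAM_ERROR_STRING_ARB) here if the assembly fails to parse.
    return prog;   // enable with glEnable(GL_FRAGMENT_PROGRAM_ARB) before drawing
}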

In my mind 1.4 was the last truly great GL_VERSION. GL_ARB_vertex_buffer_object was where things went wrong; instead of having a single simple well-defined and predictable (that's an important word) interface to vertex buffers it was really fuzzy, quite unclear about what to do and when, and more often than not actually ran slower than plain old vertex arrays. Doom 3 didn't help - it was very weird in the way it used vertex buffers (no discard, no no-overwrite, respecifying all buffer content each frame) and drivers had to work with that usage pattern rather than the way buffers are meant to be used. Since then OpenGL's been a moving target, with each successive version adding as many new extensions to correct mistakes made in 1.5 and 2.0 as it adds for new functionality. But yet these mistakes are legacy baggage that must still be carried in every driver.

No wonder things are such a mess.
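(For reference, the streaming pattern the extension expected looks roughly like this - a sketch in GL 1.5 terms, GLEW assumed, which is essentially D3D's discard/no-overwrite idiom:)

#include <GL/glew.h>

// Orphan the buffer each frame instead of overwriting storage the GPU may still be reading.
void StreamVertices(GLuint vbo, const void* verts, GLsizeiptr bytes)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);

    // "Discard": re-specify the data store with no data so the driver can hand back
    // fresh memory while the GPU finishes with last frame's copy.
    glBufferData(GL_ARRAY_BUFFER, bytes, 0, GL_STREAM_DRAW);

    // Then fill it. Writing straight over an in-use buffer forces a stall or a hidden copy.
    glBufferSubData(GL_ARRAY_BUFFER, 0, bytes, verts);
}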

I'm quite comfortable with vertex buffers and shaders in D3D9 and 11. I'd never go back to fixed pipeline/immediate mode at this stage.
Javaun Thompson
 
Posts: 3397
Joined: Fri Sep 21, 2007 10:28 am

Post » Wed May 16, 2012 2:17 am

...They would continue to encourage programmers to pump out the most outrageously inefficient code and create as many layers of abstraction as humanly possible.
This is THE mother of all problems. Hardware companies want to sell their products. For graphics cards, there are only two selling points:
Functionality and performance.

There's not much to do at the moment in terms of adding functionality. Modern GPUs are capable of doing pretty much all media-related stuff in hardware.
They've even added sound support for HDTV and Blu-ray playback.

What's left is adding performance as a selling point, which leads to, as you said, inefficient code - as stupid as that may sound at first.
The faster the hardware, the easier it is to get acceptable frame rates without the need for heavy optimization.
This results in software architecture progressing more slowly than the hardware it runs on.

This is in fact not really a new problem.
The good ole' assembler code back in the day was way more efficient in terms of memory usage and per-clock operations than the C language that came later to make programming easier. Easier at the expense of efficiency.
Modern C/C++ compilers are very good at generating machine code, so performance is not really a problem there anymore, but what we see now is another shift from (IMHO) pretty easy and efficient C code to even easier languages like C# or Java, which are pretty widespread but less efficient than C/C++, resulting in wasted performance. Again.

In short: the faster the hardware, the slower the software. Weird...


The only way for them to remain competitive, short of pulling a rabbit out of their hats, is to make their products more efficient from the ground up and then eventually force programmers to write more efficient code, just as the automobile industry had to redesign cars for efficiency and Congress had to start taxing gas and reduce the speed limit to 55mph.
If only they would do that.
When I look at "high performance" graphics cards with unbearably noisy fans and ridiculously high energy consumption, I'd say they don't know what they're doing.
This applies even more when adding additional graphics cards in SLI mode, which does nothing but boost frame rates.
Which leads us back to: right! Inefficient software.

It's this nonsense that creates the need for higher-wattage power supplies.
Nothing else in a PC eats up so much energy for so little improvement.
Dale Johnson
 
Posts: 3352
Joined: Fri Aug 10, 2007 5:24 am

Post » Wed May 16, 2012 12:31 pm

Driver quality was the main reason why I switched. I have two current projects - one D3D11 and the other OpenGL. The D3D11 work is a pure pleasure, whereas OpenGL is just a constant battle. Extensions don't help; I'm trying to keep a single backend rendering path but I'm constantly being hit by having to check for extensions and diverge in order to get performance. Sometimes I just emulate a newer extension in software - many of the glDrawElements variants can be emulated by the original function, so if the entry point doesn't exist I point it at my emulated version instead. Nasty stuff. But that aside, driver quality is still significantly lower.
Sounds like working on a construction site. :biggrin: This is what I meant: you waste your time doing workarounds for problems caused by a lack of proper maintenance in OpenGL, instead of concentrating on designing the algorithms that actually do the calculations for the program itself.
In my opinion, designing software should be a process that concentrates on the underlying math that solves the user's problems.
If the programming environment itself becomes a problem, there's definitely something wrong.

I've done quite a bit of work recently with OpenGL shaders using the old assembly language extensions, mainly because I could never get any kind of reliability with GLSL (I wasn't able to test locally on much variant hardware at the time, our team is quite geographically dispersed) and I needed to run on NV, AMD and Intel (one of the team leaders is very strong about "it must run on older hardware" too - ugh). They work and they're not a total madhouse like all the recent GLSL variants.
Which again shows the problem: as far as I know, GLSL (like HLSL in D3D) was meant as a simplification of the programming process, adding full programming flexibility without the need to handle the shader registers directly via assembler.
If it doesn't work, there's no real reason for it to exist. Standards should be solid, not patchwork.
With my own engine running on D3D9 (still running XP, so I'm not able to go beyond that as I can't test it), I try to avoid anything that might endanger reliability.
I do custom graphics presentation work (visualization of products still in development / that haven't been built yet) that has to work on any given hardware, including older notebooks or dated PCs.
You never know what kind of hardware a customer will be running your software on.
Imagine the head of a company doing a 3D presentation with your software to sell a future product to a potential customer when it crashes with a BSOD. :blink:
In that kind of situation it doesn't help to recommend getting a new PC or filing a bug report... :laugh:

In my mind 1.4 was the last truly great GL_VERSION. GL_ARB_vertex_buffer_object was where things went wrong; instead of having a single simple well-defined and predictable (that's an important word) interface to vertex buffers it was really fuzzy, quite unclear about what to do and when, and more often than not actually ran slower than plain old vertex arrays.
Vertex buffers in D3D were always pretty straightforward.
As you'll know, MS even added mesh handling functions (ID3DXMesh) later on that set up the vertex buffers for an object automatically with a few additional function calls.
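(To show what I mean by straightforward, a bare-bones sketch - error handling trimmed, names made up, 'device' assumed to be a valid IDirect3DDevice9*:)

#include <d3d9.h>
#include <cstring>

struct Vertex { float x, y, z; DWORD color; };
const DWORD kFVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;

// Create a managed vertex buffer and copy the vertex data into it.
IDirect3DVertexBuffer9* CreateFilledVB(IDirect3DDevice9* device,
                                       const Vertex* verts, UINT count)
{
    IDirect3DVertexBuffer9* vb = 0;
    device->CreateVertexBuffer(count * sizeof(Vertex), 0, kFVF,
                               D3DPOOL_MANAGED, &vb, 0);

    void* data = 0;
    vb->Lock(0, 0, &data, 0);                     // lock the whole buffer
    std::memcpy(data, verts, count * sizeof(Vertex));
    vb->Unlock();
    return vb;                                    // later: SetStreamSource + DrawPrimitive
}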

Doom 3 didn't help - it was very weird in the way it used vertex buffers (no discard, no no-overwrite, respecifying all buffer content each frame) and drivers had to work with that usage pattern rather than the way buffers are meant to be used.
Sounds similar to what D3D9 does with ID3DXMesh. I guess this method is slower (constant copy operations even if the object is already in graphics memory) but leads to safer code execution and more efficient memory usage, as memory is refilled each loop with only the necessary content.
If graphics memory is full, you can render the content in it, discard it when finished and load the next chunk of graphics data for rendering, until all data for the frame has been rendered.
When using this method, there's no need to check the actual size of the available memory and you'll never run into out-of-memory situations.
Not the fastest, but a simple and reliable way to do it, as graphics memory size will only become a problem if the data for a single object is too large to fit into graphics memory at once.


Since then OpenGL's been a moving target, with each successive version adding as many new extensions to correct mistakes made in 1.5 and 2.0 as it adds for new functionality. But yet these mistakes are legacy baggage that must still be carried in every driver.

No wonder things are such a mess.
I'm starting to wonder why Rage is running at all. :laugh:

I'm quite comfortable with vertex buffers and shaders in D3D9 and 11. I'd never go back to fixed pipeline/immediate mode at this stage.

I don't have any problems with using vertex buffers or shaders. Going back to fixed function was not my proposal. I just mentioned it to show that life was easier back then. You switched it on and it either worked or it didn't.
My wish would be a fully standardized and fully flexible hardware shader unit implemented in modern graphics hardware, without the constant need to check for available shader versions.
You can do any imaginable calculation on a main CPU without having to think about its capabilities. It just gets slower when the CPU needs more cycles to do the math.
Shader code usually needs to be written for the respective shader versions.
If your code runs on older hardware, it will fail unless you have fallback code to replace it.
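(Roughly what that version juggling looks like with D3DX9 - an illustrative sketch, not my engine code, error handling trimmed:)

#include <d3dx9.h>

// Pick a pixel shader profile from the reported caps, then compile the same HLSL
// source against it. Return 0 if even the lowest profile isn't available, so the
// caller can switch to a fixed-function fallback path instead.
LPD3DXBUFFER CompileForCaps(const D3DCAPS9& caps, const char* src, UINT len, const char* entry)
{
    const char* profile = 0;
    if      (caps.PixelShaderVersion >= D3DPS_VERSION(3, 0)) profile = "ps_3_0";
    else if (caps.PixelShaderVersion >= D3DPS_VERSION(2, 0)) profile = "ps_2_0";
    else return 0;

    LPD3DXBUFFER bytecode = 0;
    D3DXCompileShader(src, len, 0, 0, entry, profile,
                      0, &bytecode, 0, 0);
    return bytecode;   // the HLSL itself may still need per-profile #ifdefs
}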

My main point is that life would be easier for me as a programmer if I could just concentrate on the math and ignore the underlying hardware, as this is the part that slows down development tremendously.
Additionally, it would make more sense to sell the user faster hardware.
Faster hardware -> faster software. But if necessary, it would also run on slower hardware. (Backward) compatibility was once the main selling point of PCs.
Nick Pryce
 
Posts: 3386
Joined: Sat Jul 14, 2007 8:36 pm

Post » Wed May 16, 2012 7:47 am

If only they would do that.
When I look at "high performance" graphics cards with unbearably noisy fans and ridiculously high energy consumption, I'd say they don't know what they're doing.
This applies even more when adding additional graphics cards in SLI mode, which does nothing but boost frame rates.
Which leads us back to: right! Inefficient software.

It's this nonsense that creates the need for higher-wattage power supplies.
Nothing else in a PC eats up so much energy for so little improvement.

LOL, it's gotten so ridiculous that the latest trend is submerging the whole computer in mineral oil, and recently someone demonstrated a fan with built-in active noise canceling. Enthusiasts have built overpowered rigs and gone to extremes cooling them before, but the reason more people are doing it today is that the electronics are getting cheaper. For about $1,500.00 you can build a rig that will crush any game on the market. However, that trend also means that eventually something cheaper, quieter, and more energy-efficient will do the same.
Mr. Allen
 
Posts: 3327
Joined: Fri Oct 05, 2007 8:36 am

Post » Wed May 16, 2012 7:13 am

@Wuliheron:
Remember the times when no fans for cooling were necessary? Ahhh.... sweet silence! :biggrin:
Thomas LEON
 
Posts: 3420
Joined: Mon Nov 26, 2007 8:01 am

Post » Wed May 16, 2012 4:41 am

@Wuliheron:
Remember the times when no fans for cooling were necessary? Ahhh.... sweet silence! :biggrin:

You can still build a fanless PC if you want; however, I'm just hoping the rotosub active noise-canceling fans come on the market soon and don't cost an arm and a leg. With those kinds of fans and the right case, the thing should be so quiet I'll never hear it from under my desk.
Solina971
 
Posts: 3421
Joined: Thu Mar 29, 2007 6:40 am

Post » Wed May 16, 2012 10:50 am

Once in a while I turn on my old N64. This considerably old piece of hardware doesn't make any noise at all, since it has no fans or moving drives.
Wonderful! :biggrin:

I hope SSDs, USB memory and NO fans at all will become standard in PCs in the near future.
Imagine a high-powered, multifunctional PC under your desk that's as quiet as a netbook.
Eddie Howe
 
Posts: 3448
Joined: Sat Jun 30, 2007 6:06 am

Post » Wed May 16, 2012 3:33 am

Intel's upcoming Haswell might fit the bill at 15 watts. By then they might even be putting phase-change RAM on the thing.
Once in a while I turn on my old N64. This considerably old piece of hardware doesn't make any noise at all, since it has no fans or moving drives.
Wonderful! :biggrin:

I hope SSDs, USB memory and NO fans at all will become standard in PCs in the near future.
Imagine a high-powered, multifunctional PC under your desk that's as quiet as a netbook.

It won't be under your desk. You'll only need 3 chips for the entire system: SSD, system RAM, and computer. Intel's upcoming Haswell will require all of 15 watts, and if you want to upgrade performance and do CrossFire, you'll simply add a 4th chip to the mobo. No more oversized power supplies or cases required, and at that point they will no longer be "rigs", just upgradeable machines you eventually replace.
Adrian Powers
 
Posts: 3368
Joined: Fri Oct 26, 2007 4:44 pm

Post » Wed May 16, 2012 1:35 pm

^ Sounds interesting. I'll have a closer look. :clap:
Life long Observer
 
Posts: 3476
Joined: Fri Sep 08, 2006 7:07 pm

Post » Wed May 16, 2012 1:51 am

^ Sounds interesting. I'll have a closer look. :clap:

AMD's Trinity is the more architecturally interesting of the new chips coming out and will be on the market this year, but the big question is when they will be able to add the RAM. At most they're a couple of years away from being able to do so, so we could see the first all-in-one PC capable of respectable graphics any time now.
Chavala
 
Posts: 3355
Joined: Sun Jun 25, 2006 5:28 am

Post » Tue May 15, 2012 11:15 pm

Anybody heard of http://en.wikipedia.org/wiki/Raspberry_Pi?

It only works with the technology that's currently available, but even on this little device you can play Quake III smoothly at 1080p resolution, and it uses just 3.5 watts of power. Impressive.
Steve Bates
 
Posts: 3447
Joined: Sun Aug 26, 2007 2:51 pm

Post » Wed May 16, 2012 12:26 pm

Anybody heard of http://en.wikipedia.org/wiki/Raspberry_Pi?

It only works with the technology that's currently available, but even on this little device you can play Quake III smoothly at 1080p resolution, and it uses just 3.5 watts of power. Impressive.

Yeah, you can also buy computers that come as thumb drives. If you think that's impressive, just wait another 3 years.
x a million...
 
Posts: 3464
Joined: Tue Jun 13, 2006 2:59 pm

Post » Wed May 16, 2012 8:08 am

Interesting historical note: http://www.team5150.com/~andrew/carmack/johnc_plan_2000.html#d20000307
Trevi
 
Posts: 3404
Joined: Fri Apr 06, 2007 8:26 pm

Post » Wed May 16, 2012 5:49 am

Interesting historical note: http://www.team5150.com/~andrew/carmack/johnc_plan_2000.html#d20000307

Wow, eleven years ago and he already had a well developed concept of hardware acceleration.
Sarah Kim
 
Posts: 3407
Joined: Tue Aug 29, 2006 2:24 pm

Post » Wed May 16, 2012 4:33 am

Carmack:"The primary problem is that textures are loaded as a complete unit, from the smallest mip map level all the way up to potentially a 2048 by 2048 top level image. Even if you are only seeing 16 pixels of it off in the distance, the entire 12 meg stack might need to be loaded."
That's the main reason for using megatexture: it splits the texture data into tiles and copies only the needed tiles, not the complete texture, into video memory!
Great for efficient memory usage.
But the two main GPU designers obviously never thought about that one before.
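(The core of the tile idea, sketched very roughly - not id's actual megatexture code, just an illustration of uploading a single visible tile into a resident atlas texture:)

#include <GL/gl.h>

const int kTileSize = 128;   // illustrative tile dimensions

// Copy one tile's worth of pixels into its slot in an already-created atlas texture.
// Only this tile crosses the bus; the rest of the huge source texture stays on disk.
void UploadVisibleTile(GLuint atlasTexture, int slotX, int slotY,
                       const unsigned char* tilePixels /* kTileSize*kTileSize RGBA */)
{
    glBindTexture(GL_TEXTURE_2D, atlasTexture);
    glTexSubImage2D(GL_TEXTURE_2D, 0,
                    slotX * kTileSize, slotY * kTileSize,
                    kTileSize, kTileSize,
                    GL_RGBA, GL_UNSIGNED_BYTE, tilePixels);
}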

Although this is not really new: PowerVR chips have used tile-based rendering for almost two decades now to reduce bandwidth requirements.
Looks like this technology is unpopular among hardware designers/programmers due to a lack of understanding.
Introducing new technologies is always a lengthy and painful process....
Chloé
 
Posts: 3351
Joined: Sun Apr 08, 2007 8:15 am
