What is the best PC to play Skyrim on?

Post » Sat May 05, 2012 2:29 pm

I'd say that there wasn't a ton of reason behind pouring resources into these tools in the 80's when the hardware market couldn't use it. Now that it's a viable alternative we should expect to see it becoming worth the investment to develop for. That's when you see breakthroughs, when it's worth it. What good would multithreading have done 80's computers that weren't made for it? It'd be weird to have the software before the hardware, honestly.
I am afraid I disagree with almost everything you say here. :)

Parallel computing has been the subject of research since the fifties. That's 60+ years ago. My countryman http://en.wikipedia.org/wiki/Edsger_W._Dijkstra did foundational work on http://en.wikipedia.org/wiki/Semaphore_%28programming%29 a long time ago. And there have been machines for parallel computing for ages too. Whether it is a multi-core CPU, a motherboard with multiple CPUs, a supercomputer with many CPUs, or a true distributed system (CPUs connected by networks), the underlying algorithms, problems, and solutions share the same basic principles. We have semaphores, message passing, shared memory, and a whole bunch of other primitives that we can use when programming parallel algorithms. But they give us only very basic tools. It's like having a hammer, a saw, and nails, and then having to build a cathedral. We know it can be done, but it takes a lot of thinking, and it remains tricky to avoid mistakes.
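To make the "basic tools" point concrete, here is a minimal sketch (in Python, names and counts my own) of Dijkstra-style semaphore use: a binary semaphore serializing updates to a shared counter. The primitive works, but notice how all the correctness reasoning is still on the programmer.

```python
import threading

# A binary semaphore guarding a shared counter: the classic
# mutual-exclusion primitive Dijkstra described.
sem = threading.Semaphore(1)
counter = 0

def worker(n):
    global counter
    for _ in range(n):
        sem.acquire()   # P operation: wait for exclusive access
        counter += 1    # critical section
        sem.release()   # V operation: let the next thread in

threads = [threading.Thread(target=worker, args=(10000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000: no updates are lost, because the semaphore serializes them
```

Forget a single `acquire`/`release` pair anywhere in a large program and updates can silently vanish, which is exactly the "hammer and nails versus cathedral" problem.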

So there has always been work on parallel programming, both from the research community and from practitioners. And still, picking up these new tools is not as easy as moving from C to C++, or from Pascal to Java. I don't believe that after 60 years of research, easy solutions will suddenly pop up.

You think "It'd be weird to have the software before the hardware, honestly."? I think the exact opposite. Why build hardware if you don't know what to do with it? I think the order has always been: somebody needed to solve a problem; somebody (else) came up with a way (an algorithm) to solve it; then somebody had to implement that solution (write a program); then you needed to find hardware to run your program on. What CPUs do hasn't changed that much over the last 50 years. They have an instruction set that hasn't changed much (still add, multiply, compare, jump, store, fetch, etc.). CPUs have changed a lot, but what they offer to a programmer is basically still the same. Only faster.

I'd also like to revisit your 4-core vs. 8-core argument that no games use 5 or more cores. I'd like to point out that until a few years ago, no game used more than 2 cores, and before that, no game used more than 1. Technology is progressing, and 8 cores aren't that far from being standard as Intel keeps driving progress. Can't games be made to utilize 4 or 8 cores, or is it tied to one or the other?
Yes, games can be made to use 8 threads. But my point was: that's not easy. Even today, there is only a minority of games that make proper use of 4 threads. The majority still uses 2 threads. Skyrim is an example. I hope we will see more games use 4 or 8 threads, but I don't expect to see that within the next 3-4 years. So imho buying a 4-core CPU (without HT) today is a good choice for gaming.

Small example why it isn't that easy.

Suppose you have a CPU with 1 core. It can do 1 billion instructions/second.
Let's ignore the GPU at the moment.
Suppose a program (a game) needs 100 million instructions to render 1 frame.
That means our game will run at 10 frames/second.

Now suppose we have 2 cores. Still 1 billion instructions/second per core.
To make use of that, we need to split our program into 2 threads.
Suppose we can split the program in these threads:
1) render thread. requires 50 million instructions to render 1 frame.
2) audio thread. requires 30 million instructions to render 1 frame.
3) AI thread. requires 20 million instructions to render 1 frame.
We then run thread1 on core-A, and let core-B be used by thread2 and thread3.
As you can see, both core-A and core-B will be done twice as fast.
Instead of needing 100ms (with 1 core), we are now done after 50ms.
The frame/sec goes up from 10 fps to 20 fps !
Awesome.

Now suppose we get a 4-core system.
We need 4 threads. Thread1 (the render thread) is very complicated. We can't split it easily.
But the AI thread can be split. So we get 2 AI threads now, 3A and 3B, each requiring 10 million instructions.
Let's start the 4 threads on our 4 cores.
Thread1 still requires 50 million instructions. It will take 50 milliseconds to finish.
Thread2 still requires 30 million instructions. It will run for 30 milliseconds. And then the core will be idle for 20 milliseconds.
Thread3A requires 10 million instructions. It will run for 10 milliseconds. And then the core will wait for 40 milliseconds.
Thread3B requires 10 million instructions. It will run for 10 milliseconds. And then the core will wait for 40 milliseconds.

Result: the slowest thread (1) still requires 50 milliseconds to finish.
Your fps will again be 20 fps.
We went from 2 cores to 4 cores, and gained no fps. And all 4 cores were in fact in use !

Now we put in a lot of work, and split the render-thread (thread1) into 2 new threads.
Thread1-A will need 30 million instructions, thread1-B 20 million instructions.
We now run thread-1A on core-A, thread-1B on core-B, thread-2 on core-C, and threads 3A and 3B on core-D.
Result: both core-A and core-C will finish after 30 milliseconds; core-B and core-D will finish after 20 milliseconds.
We now have all threads finished after 30 milliseconds. Our fps went up from 20 to 33 !

I hope you get the picture now.
Suppose we get an 8-core CPU.
We need to split our threads again. This will become harder and harder each time.
Suppose we can split the AI threads into 10 little threads, each requiring 2 million instructions. Will this help us? No, because the two slower threads (1A and 2) will still finish after 30 milliseconds. So we need to split both thread 1A and thread 2 into smaller threads. Will that be doable? If so, then we are saved. But for some programs, there will be core functionality that maybe cannot be split into smaller threads. What if render thread 1A cannot be split? We can split all the other threads into smaller ones, but that won't help at all. We will only get higher performance if we can split the heaviest thread(s). If you can't split all heavy threads into roughly equal-sized chunks, then more cores will not help at all.
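The arithmetic in the whole example above can be sketched in a few lines of (hypothetical) Python: a frame is done when the busiest core finishes, so fps only improves when the heaviest thread shrinks. The function names and assignments are my own illustration.

```python
# Model the example: each core runs its assigned threads in sequence,
# so the frame is done when the busiest core finishes.
# Costs are in millions of instructions; each core does 1000 M/s,
# so N million instructions take N milliseconds.

def frame_ms(assignment):
    """assignment: one list of thread costs per core (millions of instructions)."""
    return max(sum(core) for core in assignment)

def fps(assignment):
    return 1000 / frame_ms(assignment)

one_core     = [[50, 30, 20]]               # render + audio + AI on one core
two_cores    = [[50], [30, 20]]             # render | audio + AI
four_cores   = [[50], [30], [10], [10]]     # AI split in two: render still dominates
render_split = [[30], [20], [30], [10, 10]] # render split too

print(fps(one_core))                 # 10.0
print(fps(two_cores))                # 20.0
print(fps(four_cores))               # 20.0 - extra cores wasted
print(round(fps(render_split), 1))   # 33.3
```

Running it reproduces the numbers in the text: 10, 20, 20, and ~33 fps, and makes the "slowest thread sets the frame rate" point mechanical.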

I think this is a problem games have. Most games have some functionality that cannot easily be split into smaller threads. You can split off AI, you can split off audio handling. Maybe a few more smaller tasks. But there is some large rendering work that cannot easily be split. I guess if you want to do that, you have to re-design your engine from the ground up. And few engines have done that yet. There will be more, but that takes time. And even then, you can only split a program into multiple threads until you run into the wall where there is one thread that you can't split further.

Anyway, this has become a lecture. Time to stop boring you. :)
User avatar
Charles Weber
 
Posts: 3447
Joined: Wed Aug 08, 2007 5:14 pm

Post » Sat May 05, 2012 2:07 am

Spoilers for space
Spoiler
I am afraid I disagree with almost everything you say here. :smile:

Parallel computing has been the subject of research since the fifties. That's 60+ years ago. My countryman Edsger Dijkstra did foundational work on semaphores a long time ago. And there have been machines for parallel computing for ages too. Whether it is a multi-core CPU, a motherboard with multiple CPUs, a supercomputer with many CPUs, or a true distributed system (CPUs connected by networks), the underlying algorithms, problems, and solutions share the same basic principles. We have semaphores, message passing, shared memory, and a whole bunch of other primitives that we can use when programming parallel algorithms. But they give us only very basic tools. It's like having a hammer, a saw, and nails, and then having to build a cathedral. We know it can be done, but it takes a lot of thinking, and it remains tricky to avoid mistakes.

So there has always been work on parallel programming, both from the research community and from practitioners. And still, picking up these new tools is not as easy as moving from C to C++, or from Pascal to Java. I don't believe that after 60 years of research, easy solutions will suddenly pop up.

You think "It'd be weird to have the software before the hardware, honestly."? I think the exact opposite. Why build hardware if you don't know what to do with it? I think the order has always been: somebody needed to solve a problem; somebody (else) came up with a way (an algorithm) to solve it; then somebody had to implement that solution (write a program); then you needed to find hardware to run your program on. What CPUs do hasn't changed that much over the last 50 years. They have an instruction set that hasn't changed much (still add, multiply, compare, jump, store, fetch, etc.). CPUs have changed a lot, but what they offer to a programmer is basically still the same. Only faster.
There being work done does not mean that significant resources are being poured into it. There is a big difference between a number of people tinkering around on a side project and several large corporations pouring money into R&D for something. Computer science has also changed drastically since the 50's; it might as well have been a different field. We didn't have publicly available hyper-threading-capable hardware until 2002, and then not very much after that (until 2008 with Nehalem i7). In 2002 it was difficult for users to decide whether or not to enable hyper-threading because their software wasn't optimized for it at the time, so there wasn't the necessary push immediately, but software eventually came around now that the hardware existed.

As for your disagreement on hardware following software, this hasn't historically been the case. You get a beefy piece of hardware, and then developers slowly write software to exploit it better and better. There was no need to make OSes capable of addressing 4GB+ until having more than 4GB in one machine became possible; now it's the norm. You wouldn't make a video game that requires a 16-core processor unless such a processor existed, or you'd have no market. The reason software can't drive hardware is that there's no need for software to be made if it can't be used. I can see some developers including something in their software with the mindset that technology will become available to use it, but that's them looking at possible trends, not expecting to push the market. Just being ready for it.


Yes, games can be made to use 8 threads. But my point was: that's not easy. Even today, there is only a minority of games that make proper use of 4 threads. The majority still uses 2 threads. Skyrim is an example. I hope we will see more games use 4 or 8 threads, but I don't expect to see that within the next 3-4 years. So imho buying a 4-core CPU (without HT) today is a good choice for gaming.
We already see games using 4 threads; that number will only increase and perhaps become the norm. I expect to see at least a handful of games in the next 5 years capable of using 8 threads. How quickly it happens will sadly depend on the next generation of consoles. If they have hyper-threading, then yes, this will happen almost immediately; if not, then these sorts of things will be significantly rarer. It is rather disappointing how consoles hold the market back.

Spoiler
Small example why it isn't that easy.

Suppose you have a CPU with 1 core. It can do 1 billion instructions/second.
Let's ignore the GPU at the moment.
Suppose a program (a game) needs 100 million instructions to render 1 frame.
That means our game will run at 10 frames/second.

Now suppose we have 2 cores. Still 1 billion instructions/second per core.
To make use of that, we need to split our program into 2 threads.
Suppose we can split the program in these threads:
1) render thread. requires 50 million instructions to render 1 frame.
2) audio thread. requires 30 million instructions to render 1 frame.
3) AI thread. requires 20 million instructions to render 1 frame.
We then run thread1 on core-A, and let core-B be used by thread2 and thread3.
As you can see, both core-A and core-B will be done twice as fast.
Instead of needing 100ms (with 1 core), we are now done after 50ms.
The frame/sec goes up from 10 fps to 20 fps !
Awesome.

Now suppose we get a 4-core system.
We need 4 threads. Thread1 (the render thread) is very complicated. We can't split it easily.
But the AI thread can be split. So we get 2 AI threads now, 3A and 3B, each requiring 10 million instructions.
Let's start the 4 threads on our 4 cores.
Thread1 still requires 50 million instructions. It will take 50 milliseconds to finish.
Thread2 still requires 30 million instructions. It will run for 30 milliseconds. And then the core will be idle for 20 milliseconds.
Thread3A requires 10 million instructions. It will run for 10 milliseconds. And then the core will wait for 40 milliseconds.
Thread3B requires 10 million instructions. It will run for 10 milliseconds. And then the core will wait for 40 milliseconds.

Result: the slowest thread (1) still requires 50 milliseconds to finish.
Your fps will again be 20 fps.
We went from 2 cores to 4 cores, and gained no fps. And all 4 cores were in fact in use !

Now we put in a lot of work, and split the render-thread (thread1) into 2 new threads.
Thread1-A will need 30 million instructions, thread1-B 20 million instructions.
We now run thread-1A on core-A, thread-1B on core-B, thread-2 on core-C, and threads 3A and 3B on core-D.
Result: both core-A and core-C will finish after 30 milliseconds; core-B and core-D will finish after 20 milliseconds.
We now have all threads finished after 30 milliseconds. Our fps went up from 20 to 33 !

I hope you get the picture now.
Suppose we get an 8-core CPU.
We need to split our threads again. This will become harder and harder each time.
Suppose we can split the AI threads into 10 little threads, each requiring 2 million instructions. Will this help us? No, because the two slower threads (1A and 2) will still finish after 30 milliseconds. So we need to split both thread 1A and thread 2 into smaller threads. Will that be doable? If so, then we are saved. But for some programs, there will be core functionality that maybe cannot be split into smaller threads. What if render thread 1A cannot be split? We can split all the other threads into smaller ones, but that won't help at all. We will only get higher performance if we can split the heaviest thread(s). If you can't split all heavy threads into roughly equal-sized chunks, then more cores will not help at all.
I think this is a problem games have. Most games have some functionality that cannot easily be split into smaller threads. You can split off AI, you can split off audio handling. Maybe a few more smaller tasks. But there is some large rendering work that cannot easily be split. I guess if you want to do that, you have to re-design your engine from the ground up. And few engines have done that yet. There will be more, but that takes time. And even then, you can only split a program into multiple threads until you run into the wall where there is one thread that you can't split further.

Anyway, this has become a lecture. Time to stop boring you. :smile:
You're not boring me by any stretch of the imagination. I find this very interesting. I already had a decent grasp on hyper-threading, but that's got to be the best explanation I've seen.

Still, it doesn't really prove anything. You're just saying that it's difficult to do, not impossible. As the technology continues to be available, developers will get more and more comfortable with it. Now, logically, if there are 8 threads, then the ideal would be to split every component by 8 (if some are split by 8, it will always be advantageous to split all by 8 if possible). As splitting technology/methodology becomes better and more common, we'll see these kinds of goals being reached more regularly. The problem we're facing right now is figuring out how to split those functionalities, or how to design them in such a way that they may be easily split. It would be naive to say it can't be done. It's only a matter of "when". As I said above, the nature of the next-gen consoles will go a long way to enable or hinder this timeline.

I'm sure people thought the 4 threads would never really be used, but they are.
User avatar
stevie trent
 
Posts: 3460
Joined: Thu Oct 11, 2007 3:33 pm

Post » Sat May 05, 2012 6:13 am

Sheesh. I need to learn to be more concise .....

There being work done does not mean that significant resources are being poured into it. There is a big difference between a number of people tinkering around on a side project and several large corporations pouring money into R&D for something.
I disagree.
Decades of research is not "tinkering around on the side". Research work in networking in the seventies is what gave us the Internet 20 years later. Work on distributed systems in the eighties is giving us lots of new technology we see deployed now. And the fundamentals were laid even earlier. Like I said, Dijkstra came up with a lot of fundamental algorithms in the fifties.

And IMHO, pouring money into research guarantees you nothing. Zilch. To build something new, you first need an idea of how to solve the problem. An algorithm. Certainly when talking about parallel computing. You can't force algorithms by just hiring more people. You need one person, or a few people, to come up with a bright new idea. You can't force that. It is my belief that 1 really smart guy can come up with more new ideas than 1000 mediocre people. Even in software development, 1 good programmer is worth more than 10 or 20 mediocre ones.

Pouring in money helps maybe when you have to implement something. When you already know what you want, and know how to do it. Then it can help to have more people iron out the little details. But with something as fundamental as coming up with new paradigms or tools for better parallel programming, having lots of people working on it won't help much.

Note that building new CPUs is not the same as writing software. Building chips requires a lot of work. And everything has to be tested to the max. With software you can always put out patches with bugfixes later. That won't work for hardware. I'll give you another example: 10+ years ago I was working for a startup, building a new box with high-tech chips in it, and software running on a control plane. The big edge the company had was a few software guys who knew how to build their software, when it was rumoured very few people knew how to do that. We had, say, 10 of those guys, and another 10 software developers doing supporting stuff. And then we had 200+ engineers developing 5 new chips! And a machine room full of expensive boxes to do the verification of the chips. The big difference for the company was those 10 software guys. But 95% of the development budget was poured into hardware development. That's the way it is: hardware requires a lot of people and a lot of time. But the true breakthroughs are made in the software.

We haven't had publically available hyperthreading capable hardware until 2002
Completely irrelevant.
We've had multi-processor systems for decades. You need to program those with parallel programming. The problem set has been known for decades. People have been working on paradigms, and building tools, for decades. We have come to the point where we can build parallel software now. But it is not easy. It requires more input from the individual programmer than just using a new programming language, or making a simple library call.

because their software wasn't optimized for it at the time
Skyrim was optimized recently, by recompiling with a different flag. It was that easy. But writing parallel software is not as simple as "just optimizing". It requires a different way of programming. A different way of thinking. It will come, but not quickly. And even then, some problems cannot easily be solved with parallel programming. Example again: you can make 9 women pregnant, but you won't get a baby in 1 month.
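That "pregnant" quip is essentially Amdahl's law: if a fraction of the work is inherently serial, extra cores stop paying off. A small sketch (function name and numbers my own) makes the ceiling visible:

```python
def amdahl_speedup(parallel_fraction, cores):
    """Upper bound on speedup when only part of the work parallelizes (Amdahl's law)."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Suppose half the frame is an unsplittable render thread (parallel fraction 0.5):
print(round(amdahl_speedup(0.5, 2), 2))      # 1.33
print(round(amdahl_speedup(0.5, 8), 2))      # 1.78
print(round(amdahl_speedup(0.5, 10000), 2))  # 2.0 - never better than 2x
```

Even 10,000 cores cannot push past 2x here; only shrinking the serial part helps, which matches the frame-time example earlier in the thread.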

You get a beefy piece of hardware and then developers slowly make software to make it better and better.
No. You have somebody with a problem. Then a software guy comes up with an algorithm to solve it. Then it gets implemented (programmed). Then you run it on a piece of hardware you have lying around. Programmers write their programs in a certain programming language. They don't care what hardware is underneath it.

There was no need to make OS's capable of 4GB+ until having more than 4GB in one machine became possible and is now the norm.
Completely irrelevant. Moving an OS from 32-bit to 64-bit is simple. We've done it before when moving OSes from 16-bit to 32-bit. Basically it's just changing some values in header files and recompiling. There are no new problems to be solved. There is no challenge. It's the same as counting from 200 to 300 when you already know how to count to 200.

You wouldn't make a video game that requires a 16 core processor unless such a processor existed or you'd have no market.
There is no fundamental difference between programming for 2 cores, or 16 core. Or 10000 cores. The same problems need to be solved.

I can see some developers creating something merely included in their software with the mindset that technology will become available to use it
You are thinking way too much from a practical point of view. People who invent new stuff don't care about practical. First they want to solve these new and challenging problems. How useful it is, how applicable it is, is less important. And you seem to think that nothing is built or invented unless you can sell it to a mass market, or make a lot of money from it. That's too shortsighted. Some people do stuff for the long run. Some people do stuff for fun.

We already see games using 4 threads, that number will only increase and perhaps become the norm. I expect to see at least a handful of games in the next 5 years to be capable of using 8 threads.
I am not disagreeing with you; I'm not saying it will never happen. I'm just saying: it will take a while. Longer than you might expect. Because it's not trivial. It's trivial to move to a 64-bit executable. Use more and larger textures in games, forcing the developers to go to 64-bit. The reason they don't do that now is practical (only support 1 exe, don't leave people with less than 4GB behind). But if they decide to do it, it's easy. Writing properly parallel software is not easy. You need to redesign your engine from scratch. Look at Bethesda; they have kept building on top of the same engine for a decade now. If they still use the Creation/Gamebryo/NetImmerse engine with ES6, chances are we will not get more than 4 cores in 2016 with ES6.

Still, it doesn't really prove anything. You're just saying that it's difficult to do, not impossible.
Exactly. It's my opinion. I can't prove anything.

Similar to IPv6. I've heard many people yelling for 10 years now that IPv6 is coming "Real Soon Now". So far we've got zilch. IPv6 is not closer to reality than it was in 1999. In fact, it looks further away, tbh. Many people have declared I am nuts and clueless. But so far, I was right. Maybe IPv6 will break through in the next 2-3 years. But I am not holding my breath. (And the difference with IPv6 is: we already have the technology; we know how to do it. Parallel programming is harder than implementing IPv6, imho.)

The problem we're facing right now is figuring out how to split those functionalities or how to design them in such a way where they may be easily split. ... It's only a matter of "when".
Exactly. And we have no general way to do that. And we have no proper tools to help us. Therefore it depends on the creativity of the individual programmer (and/or software architect). If we had tools to automate it, it would happen 1000x faster.
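For what it's worth, the tooling that does exist today mostly automates only the easy, embarrassingly parallel case, where the work items are already independent. A hypothetical sketch with Python's standard thread pool (the agent function and sizes are my own stand-ins):

```python
from concurrent.futures import ThreadPoolExecutor

# The easy case: independent per-agent AI updates can be farmed out
# mechanically, because no agent depends on another's result.
def update_agent(agent_id):
    return agent_id * agent_id  # stand-in for real per-agent work

with ThreadPoolExecutor(max_workers=4) as pool:
    # map() distributes the calls across worker threads and
    # returns the results in input order.
    results = list(pool.map(update_agent, range(10)))

print(results)  # [0, 1, 4, 9, 16, 25, 36, 49, 64, 81]
```

Splitting a tightly coupled render loop this way is exactly what no such tool can do automatically, which is the point being made above.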

My new i5-3570K will be good for games in the next 4 years. What happens after 4 years, I will see then.
User avatar
Emilie M
 
Posts: 3419
Joined: Fri Mar 16, 2007 9:08 am

Post » Sat May 05, 2012 6:00 am

It seems we disagree on the fundamental level of what drives progress. I agree that a lot of development goes into solving problems, but I think you're forgetting that there's a lot of money to be had in making things better. Also, having the ability to use 8 threads at once and not being able to do so is a problem of sorts (of the not-as-efficient-as-you-could-be variety).
I disagree.
Decades of research is not "tinkering around on the side". Research work in networking in the seventies is what gave us the Internet 20 years later. Work on distributed systems in the eighties is giving us lots of new technology we see deployed now. And the fundamentals were laid even earlier. Like I said, Dijkstra came up with a lot of fundamental algorithms in the fifties.

And IMHO, pouring money into research guarantees you nothing. Zilch. To build something new, you first need an idea of how to solve the problem. An algorithm. Certainly when talking about parallel computing. You can't force algorithms by just hiring more people. You need one person, or a few people, to come up with a bright new idea. You can't force that. It is my belief that 1 really smart guy can come up with more new ideas than 1000 mediocre people. Even in software development, 1 good programmer is worth more than 10 or 20 mediocre ones.

Pouring in money helps maybe when you have to implement something. When you already know what you want, and know how to do it. Then it can help to have more people iron out the little details. But with something as fundamental as coming up with new paradigms or tools for better parallel programming, having lots of people working on it won't help much.

Note that building new CPUs is not the same as writing software. Building chips requires a lot of work. And everything has to be tested to the max. With software you can always put out patches with bugfixes later. That won't work for hardware. I'll give you another example: 10+ years ago I was working for a startup, building a new box with high-tech chips in it, and software running on a control plane. The big edge the company had was a few software guys who knew how to build their software, when it was rumoured very few people knew how to do that. We had, say, 10 of those guys, and another 10 software developers doing supporting stuff. And then we had 200+ engineers developing 5 new chips! And a machine room full of expensive boxes to do the verification of the chips. The big difference for the company was those 10 software guys. But 95% of the development budget was poured into hardware development. That's the way it is: hardware requires a lot of people and a lot of time. But the true breakthroughs are made in the software.
There is a huge difference between developing for existing technologies that can make immediate impacts in the market and, as I said, just tinkering around because you have extra money in your R&D department, which rarely results in cutting-edge technologies, especially when there isn't hardware to really test with. Are you telling me they easily created multithreaded processors 60 years ago and only had trouble making software to utilize them? I strongly doubt that; the push was both hardware and software, and neither made much progress without the other. The technology simply wasn't there back then. We're only just now getting to a time period where a thought can become reality in just a few years.

Completely irrelevant.
We've had multi-processor systems since decades. You need to program those with parallel programming. The problem-set is known for decades. People have been working on paradigms, and been building tools for decades. We came to the point were we can build parallel software now. But it is not easy. It requires more input from the individual programmer than just using a new programming language, or calling a simple library call.
It's only going to get easier. Once again, we haven't had anything available to the masses. Again, available to lab technicians does not equate to available to customers. Besides, we've had software that uses multiple processors for a while, as you've mentioned. I'm talking about the progression from a few cores to these much larger, much more complex cores.

Skyrim was optimized recently. By recompiling with a different flag. It was that easy. But writing parallel software is not as simple as "just optimizing". It requires a different was of programming. A different way of thinking. It will come, but not quickly. And even then, some problems can not easily be solved with parallel programming. Example again: you can make 9 women pregnant, but you won't get a baby in 1 month.
No, that statement was to say that the OSes available in 2002 were typically things like Server 2000, which were not optimized for multi-threading. So most people didn't really use multi-threading or necessarily buy the software, because it couldn't be used. This was simply the wrong place at the wrong time. But it still pushed future OSes to allow multi-threading, so it did push it some.

No. You have somebody with a problem. Then a software guy comes up with an algorithm to solve it. Then it gets implemented (programmed). Then you run it on a piece of hardware you have laying around. Programmers write their programs in a certain programming language. They don't care what hardware is underneath it.
This simply isn't always the case; people create hardware that's capable of more, and developers create software to meet or exceed those new limits. We literally see this all the time. Give me some examples of software designed before technology capable of using it existed. I've seen them both being made at the same time, but I can't recall any example of software existing before.

Look at the PS3. Its merely existing posed "problems" that software developers then spent the last 6 years writing software to circumvent. Brilliant software, I might add, stuff that has unlocked a lot of its potential, but all just to manage "new" technology and to get the most out of it. You've got it backwards.

Completely irrelevant. Moving an OS from 32 to 64 bit is simple. We've done it before when moving OSes from 16 bit to 32 bit. Basically it's just changing some values in headerfiles, and recompiling. There are no new problems to be solved. There is no challenge. It's the same as counting from 200 to 300, when you already knew how to count to 200.
Completely relevant. OSes capable of utilizing 32GB of RAM are not necessary unless RAM hardware gets to a viable point where there is demand for it. RAM has gotten so fast and powerful that software has been changed to allow it. The "problem" before that was that no more than 3GB could be used for a given application. Problem solved, but only after hardware was developed that made the limitation into a problem.

There is no fundamental difference between programming for 2 cores, or 16 core. Or 10000 cores. The same problems need to be solved.
Um, yeah. Didn't say otherwise. The point is that you wouldn't make a piece of software with components that can be split up into 16+ threads if no technology existed to do so.

You are thinking way too much from a practical point of view. People who invent new stuff don't care about practical. First they wanna solve these new and challenging problems. How useful it is, how applicable it is, is less important. And you seem to think that nothing is built or invented unless you can sell it to a mass market. Or make a lot of money. That's too shortsighted. Some people do stuff for the long run. Some people do stuff for fun.
You are thinking way too much from an impractical point of view. People who hire people to invent stuff don't care about the impractical. First they want to see how they can make money in a cost-effective manner. How expensive it is, how many people might buy it, what the profit is. And you seem to think that most things are built out of the goodness and curiosity of the human spirit. That's simply naive. Some people do stuff because they're curious. But most people do stuff because they're paid to do it or can make money from it.

I hope you don't mind my tongue in cheek copy/response. You are exhibiting an idealist view of the world. It is naivety in its best form. If the rest of the world thought the same way that you do then we would all be the better for it. But you are ultimately wrong. Man is a greedy, evil beast. We have our strong points, our moments of valor and grace but it ultimately comes down to what we get out of something. I try to do things out of compassion but at some intellectual level I must accept that I value helping people and the satisfied feelings that brings just like someone would value wealth. In that way, even noble actions become greed. Perhaps you and I are on opposite sides of the spectrum looking in, but I can tell you beyond a shadow of a doubt that money has and will continue to move mountains. Any individual strides we make as a society will be from idealists like yourself before the corporate mindset gets ahold of them.

I am not saying it will never happen. I'm just saying: it will take a while. Longer than you might expect. Because it's not trivial. It's trivial to move to a 64-bit executable. Using more and larger textures in games would force the developers to go 64-bit. The reason they don't do that now is practical (supporting only one exe, not leaving people with less than 4GB behind). But if they decide to do it, it's easy. Writing properly parallel software is not easy. You need to redesign your engine from scratch. Look at Bethesda, they've kept building on top of the same engine for a decade now. If they still use the Creation/Gamebryo/NetImmerse engine with ES6, chances are we will not get more than 4 cores in 2016 with ES6.
It's ultimately a toss up. If consoles enable hyperthreading then I'd expect to see it in just a few years. 4 years may be the tipping point. If not, then I'd expect a handful of games taking advantage of it over the next decade. Either way, $100 isn't going to amount to a hill of beans for many people.


Exactly. It's my opinion. I can't prove anything.

Similar to IPv6. I've heard many people yelling for 10 years now that IPv6 is coming "Real Soon Now". So far we've got zilch. IPv6 is no closer to reality than it was in 1999. In fact, it looks further away, tbh. Many people have declared I am nuts and clueless. But so far, I was right. Maybe IPv6 will break through in the next 2-3 years, but I am not holding my breath. (And the difference with IPv6 is: we already have the technology, we know how to do it. Parallel programming is harder than implementing IPv6, imho.)
You're incorrect. We already have and are using IPv6. It just requires user adoption, and people are pretty dumb when it comes to understanding and implementing new technologies. Public acceptance has nothing to do with this conversation.

We are talking about creating technology that will significantly improve the quality/speed of software. Not something that requires wide acceptance to eventually become useful like IPv6. But I'm using it on my department's VMs right now.

Exactly. And we have no general way to do that. And we have no proper tools to help us. Therefore it depends on the creativity of the individual programmer (and/or software architect). If we had tools to automate it, it would happen 1000x faster.

My new i5-3570K will be good for games in the next 4 years. What happens after 4 years, I will see then.
Which is why automation tools will be developed. I expect Intel and many software companies are hard at work on it right now. Once something is released it will take a bit, but we'll see results soon, along with the handful of firms that slowly do it manually in the meantime.
Loane
 
Posts: 3411
Joined: Wed Apr 04, 2007 6:35 am
