ATTN: Bethesda! Future Tech May Unlock TES VI Unlimited Potential

Post » Mon May 14, 2012 6:32 am

Easy. You create a standard point-based skeleton, and you assign high-level points in the voxel tree to the bones in the same way you would assign vertices, so that the child voxels inherit the positional data from their parent. Simple?

Obviously not, since it hasn't been done and most people in the industry don't believe it can be at this point.
User avatar
Lew.p
 
Posts: 3430
Joined: Thu Jun 07, 2007 5:31 pm

Post » Mon May 14, 2012 1:10 am

Easy. You create a standard point-based skeleton, and you assign high-level points in the voxel tree to the bones in the same way you would assign vertices, so that the child voxels inherit the positional data from their parent. Simple?

But can you really simply inherit parent positioning when you apply volume deformations to the cloud? Like animating a bicep bulge?
Or when you do interpolation between different keyed animations?
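For concreteness, the plain parent-inheritance scheme being debated here might look something like this toy sketch. All the names are invented, and it only applies a rigid translation per bone rather than a full skinning matrix; the bulge objection is precisely that a rigid per-bone transform like this cannot produce volume deformation on its own:

```python
class OctreeNode:
    """A node in a sparse voxel tree; child voxels inherit the parent's transform."""
    def __init__(self, position, children=None):
        self.position = position          # (x, y, z) in local space
        self.children = children or []

def translate(p, t):
    """Apply a purely translational bone transform to one point."""
    return (p[0] + t[0], p[1] + t[1], p[2] + t[2])

def skin_subtree(node, bone_offset, out):
    """Move a high-level node by its bone's offset; every descendant
    inherits the same transform without storing one of its own."""
    out.append(translate(node.position, bone_offset))
    for child in node.children:
        skin_subtree(child, bone_offset, out)
    return out

# A two-level tree bound to a bone that has moved by (1, 0, 0):
leaf = OctreeNode((0.5, 0.0, 0.0))
root = OctreeNode((0.0, 0.0, 0.0), children=[leaf])
moved = skin_subtree(root, (1.0, 0.0, 0.0), [])
# moved -> [(1.0, 0.0, 0.0), (1.5, 0.0, 0.0)]
```

Interpolating between keyed poses, or bulging a muscle, would mean per-node (or per-voxel) corrections on top of this, which is exactly where the memory-cost question comes in.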
User avatar
Dominic Vaughan
 
Posts: 3531
Joined: Mon May 14, 2007 1:47 pm

Post » Mon May 14, 2012 11:43 am

... you'll see the John Carmacks of the world begin to write code to disassemble all the particles and apply motion effects to them, meaning a monster in a game could get its hand blown off organically and all the pieces could fly into the camera or the wall ... You could make a pool of liquid rise up and become the cloak-shrouded necromancer you've been hunting just after stepping in the puddle yourself ... the transformation could happen in realtime right before your eyes, every particle transforming from liquid into the solid form of the wizard. In the next TES game, you could hack off whatever body part you manage to aim at, and you would see the flesh beneath, the blood, the cracked bone protruding ... Walls could be deformed using physics, and all the material inside the wall could come out; it wouldn't just be a two-sided low-polygon piece of junk, it would be substantial, made of something, and its dust would fill the air. Since the entire world is made of particles, you could do mathematical procedural animations over the entire world at any time that would not essentially be shaders painted over the objects, but the actual objects being deformed and manipulated on a pixel-per-pixel basis. ....

Important bits bolded. That makes it WORSE.

I'll show you. Click on this link (http://www.newgrounds.com/portal/view/342901), mess around with the program, and see how long it takes before your computer starts to lag. Hint: Get a nice pile going by using the wall pixels to collect the "sand", then click around the pile with the eraser. What's this? That, my friends, is realtime particle calculation. The more particles you have, the more information the computer needs to TRACK THE PARTICLES. The slowdown is a result of the computer needing to take away from other processes to keep the damn thing running! Note, this is only a 2D program, and is *very* limited in what it shows. If that engine is modifying everything on a per-pixel basis, then either the guy is lying and beefed up the computer used in the videos, or the engine itself is effectively magic.
User avatar
Yvonne
 
Posts: 3577
Joined: Sat Sep 23, 2006 3:05 am

Post » Mon May 14, 2012 1:34 pm

I remember the strengths specifically applied to static data, which can be stored on external media ("when the scene doesn't change"). But if you apply transformations to the voxel cloud (aka animation), that has to go into memory somehow, yes? Transformations for millions to billions of voxels can't be cheap :laugh:

The flaw in your logic, which is totally understandable given the vastly different way this system works compared to all its predecessors, is that it doesn't apply transformations to billions of voxels, because they are never in memory. They are still on the hard drive. Their engine takes a 3D mesh inside 3ds Max, for example, then writes a name for every side of the object, and then writes a further name for, say (just an example), six more angles of that one side. Now every object in the scene has a complex name. When the game engine loads an object, it doesn't load the object at all; it merely loads the name of the object and uses a placeholder for it. Now imagine your world was not filled with objects, just flat planes with names on them. The name is the reference to the actual object on the disc, but in your renderer, all the game would see is a cardboard cutout with the name on it. When the renderer determines an object is being looked at by the camera (i.e., your character), it checks whether the object is behind other objects, fully occluded, and therefore never needs to be turned on, since it can't be seen. The objects in the foreground are turned on, but only the parts that are visible to your character (your monitor's view). All of those pixels on the screen (using the referenced cardboard cutouts) are then looked at by the engine one pixel at a time, just one, and it never loads the mesh or the texture into memory. The engine is smart enough to determine each pixel's color without ever loading them. It references (like Google) the pixel it's looking for, combines just those samples in its rendering eye, produces the color, and the pixel is displayed on your screen where that object would be.

More simply, it's like this: imagine you had a dragon model, and on every portion of the dragon you wrote A, B, C, D, E, F, G, H, I, J, K ... until you had covered all the sides that can be seen with reference names. Then on each of the 16 possible angles you had further divisions (A1 through A8, etc.). Now the dragon is on your screen and you are looking at it, but it's facing you, so only A, D, E, and G would be visible, let's say. The engine does a Google-style search for all the indexed names we'd be looking at (those facing the camera) and turns them on. Then it determines which ones are behind the others (fully occluded) and turns them back off. Whatever is left is all the screen can see. No other object in the world behind the dragon, the mountains, the trees, none of it, can be seen, so they are all turned off where the dragon would be, because the dragon's pixels overlap those parts of the mountains and trees... So now that the screen is fully determined to contain A1-A7, D2-D6, E5-E6, G1, G3, and G6 (all only a few parts of the entire screen), it can begin looking at the pixels. It matches the address of the mesh with the relevant addresses of the textures and combines them as a rendering engine would, to produce the pixel at whatever color it should have. It uses this Google-style referencing system (which is very fast, much faster than loading models and textures into memory, manipulating them endlessly, and then trying to scale down the data on screen) to search for ONLY THE PIXELS IT NEEDS, once per frame, to obtain just what is seen and nothing more, leaving all other data on the disc untouched if not needed. Thus, the entire game runs off your hard drive with very little need for the GPU ...

That's what I gather is the secret of this tech from all that I've read and studied about it ...
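Taken at face value, the "Google-style referencing" described above boils down to: for each screen pixel, query an index keyed by a (part, angle) name and fetch only that one sample from disk. Here's a toy sketch of that idea, with an ordinary dictionary standing in for whatever on-disk index the real engine might use; every name and value is made up:

```python
# Hypothetical on-disk index: keys are (part, angle) names like "A1";
# values stand in for the colour samples that would otherwise come
# from a fully loaded mesh and texture.
disk_index = {
    "A1": (200, 30, 30),   # dragon snout
    "G1": (90, 90, 100),   # mountain visible beside the dragon
}

def render(visible_fragments):
    """For each screen pixel, look up only the one sample it needs;
    nothing else is ever 'loaded'. visible_fragments maps pixel
    coordinates to the index key chosen by the occlusion pass."""
    frame = {}
    for (x, y), key in visible_fragments.items():
        frame[(x, y)] = disk_index[key]   # one indexed lookup per pixel
    return frame

# A 2x1 'screen': one pixel shows the dragon, the other the mountain.
frame = render({(0, 0): "A1", (1, 0): "G1"})
# frame -> {(0, 0): (200, 30, 30), (1, 0): (90, 90, 100)}
```

Whether disk seeks could ever be fast enough to do this per pixel per frame is exactly the part skeptics dispute; the sketch only shows the shape of the claim, not its feasibility.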
User avatar
Janine Rose
 
Posts: 3428
Joined: Wed Feb 14, 2007 6:59 pm

Post » Mon May 14, 2012 9:45 am

People call it a scam only because Notch, the creator of Minecraft, said it was, though he had no proof. Just blind followers.
User avatar
Nicole Kraus
 
Posts: 3432
Joined: Sat Apr 14, 2007 11:34 pm

Post » Mon May 14, 2012 10:47 am

But can you really simply inherit parent positioning when you apply volume deformations to the cloud? Like animating a bicep bulge?
Or when you do interpolation between different keyed animations?

I'm just giving the simplest example. I'm sure the guys better versed in the mathematics of animation can find more elegant solutions. I've seen demo videos of octree technology that had animated creatures in them, so it's not an impossibility.



Obviously not, since it hasn't been done and most people in the industry don't believe it can be at this point.

Define most. Octree tech has existed for years; there are plenty of tech demos that use it. It was simply ahead of what computers could handle until recently. id Tech 6 (a research project for a future id Software game engine) is octree-based. The tech only recently hit the headlines because that Australian company came out and said "HEY GUYS LOOK AT THIS TECH DEMO" and Notch said "total scam!" ... and, from my digging around, he was pretty much the only guy who objected.
User avatar
Ernesto Salinas
 
Posts: 3399
Joined: Sat Nov 03, 2007 2:19 pm

Post » Mon May 14, 2012 11:51 am

Assuming Bethesda is working on Elder Scrolls VI already, it's too late since this technology is not even ready yet. It will be ready for Elder Scrolls VII.
User avatar
Louise Dennis
 
Posts: 3489
Joined: Fri Mar 02, 2007 9:23 pm

Post » Mon May 14, 2012 9:29 am

Important bits bolded. That makes it WORSE.

I'll show you. Click on this link (http://www.newgrounds.com/portal/view/342901), mess around with the program, and see how long it takes before your computer starts to lag. Hint: Get a nice pile going by using the wall pixels to collect the "sand", then click around the pile with the eraser. What's this? That, my friends, is realtime particle calculation. The more particles you have, the more information the computer needs to TRACK THE PARTICLES. The slowdown is a result of the computer needing to take away from other processes to keep the damn thing running! Note, this is only a 2D program, and is *very* limited in what it shows. If that engine is modifying everything on a per-pixel basis, then either the guy is lying and beefed up the computer used in the videos, or the engine itself is effectively magic.

In the current system, I agree with you. If you had 1 MILLION particles onscreen, they would all be fully loaded and require processing by the system. But in the proposed new system, there are never any particles loaded. You don't have to track them all; you only track the ones the engine can see (and thus display as a pixel on the screen). So if your screen resolution were 1024 x 768, the total number of pixels you could even see on your screen would be 786,432, not even a million. Under the current systems you'd still have to track all 1 million particles, and as you add more, your system slows down further and further... but the new system says that only the pixels required are turned on. The particles don't even exist unless they can be seen. So let's say you have that resolution above and you are looking at 1 million particles on screen, but those particles are in the center of the screen and it's zoomed out so that the particles fill only 20% of the screen. The total number of particles you could even see at this resolution would be about 157,286, only a small portion of the total 1 million particles being generated. If you turned your resolution up to HD (1920 x 1080), you'd be able to see 2,073,600 pixels on screen, of which the same 20% (due to being zoomed back from the mass of swirling particles) would still be only 414,720 rendered pixels. The rest, in this new system, are turned off. They are not being tracked by the GPU or memory. Thus we can see that the only limit to your visual performance is your screen resolution.

In all current games, screen resolution hardly matters at all, since the graphics saved to the disc are usually of high enough fidelity that if you played at 1920 x 1080, you'd see little difference going down to a lower resolution, or going from a lower one to a higher, unless you install texture mod packs like those now available for Skyrim. But with the new proposed system, the resolution actually matters, since the total cap on this engine's way of working is however many pixels it must search every second. If the resolution is higher, you are going to get more and more fidelity, and actually notice a considerable difference between each resolution.

To even approximate the same torture that current GPUs are dealing with, you'd have to render the million particles at 1920 x 1080 and zoom in so the particles filled more than half of your screen. Then you'd have to track every particle, but even so, all of the other pixels behind each particle would be turned off, and would not have to be loaded or rendered or even thought about. There is virtually no limit to what can be done with such a system.
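The arithmetic in this post is easy to check: the claimed cost ceiling is just pixels on screen times the fraction of the screen the particles cover, and the nominal particle count never appears in it. A minimal sketch of that budget calculation (the function name is mine, not anything from the demo):

```python
def visible_pixel_budget(width, height, coverage):
    """Upper bound on per-frame work in a resolution-bound renderer:
    pixels on screen times the fraction covered, independent of how
    many particles the scene nominally contains."""
    return int(width * height * coverage)

print(visible_pixel_budget(1024, 768, 1.0))   # full screen at 1024x768
print(visible_pixel_budget(1024, 768, 0.2))   # the 20% example above
print(visible_pixel_budget(1920, 1080, 0.2))  # same 20% at 1080p
```

This reproduces the post's figures: 786,432 pixels total at 1024 x 768, about 157,286 at 20% coverage, and 414,720 for the same 20% at 1920 x 1080.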
User avatar
Mariana
 
Posts: 3426
Joined: Mon Jun 12, 2006 9:39 pm

Post » Mon May 14, 2012 2:03 am

This unlimited detail scam again? :shakehead:
It's not really a scam, it's just that most of it isn't true. It's basically a very advanced voxel engine, so you won't be able to have INFINITE DETAIL.
User avatar
Rachel Cafferty
 
Posts: 3442
Joined: Thu Jun 22, 2006 1:48 am

Post » Mon May 14, 2012 5:48 am

This sounds really cool. There would be a lot of new possibilities, and maybe we will finally stop playing what is essentially the same game we've been playing for almost a decade. And no, 3D is not something new.
User avatar
Ricky Meehan
 
Posts: 3364
Joined: Wed Jun 27, 2007 5:42 pm

Post » Mon May 14, 2012 5:03 pm

Assuming Bethesda is working on Elder Scrolls VI already, it's too late since this technology is not even ready yet. It will be ready for Elder Scrolls VII.
I think they might be working on another Fallout, or DLC for Skyrim.
User avatar
Sasha Brown
 
Posts: 3426
Joined: Sat Jan 20, 2007 4:46 pm

Post » Mon May 14, 2012 10:52 am

The flaw in your logic, which is totally understandable given the vastly different way this system works compared to all its predecessors, is that it doesn't apply transformations to billions of voxels, because they are never in memory. They are still on the hard drive. Their engine takes a 3D mesh inside 3ds Max, for example, then writes a name for every side of the object, and then writes a further name for, say (just an example), six more angles of that one side. Now every object in the scene has a complex name. When the game engine loads an object, it doesn't load the object at all; it merely loads the name of the object and uses a placeholder for it. Now imagine your world was not filled with objects, just flat planes with names on them. The name is the reference to the actual object on the disc, but in your renderer, all the game would see is a cardboard cutout with the name on it. When the renderer determines an object is being looked at by the camera (i.e., your character), it checks whether the object is behind other objects, fully occluded, and therefore never needs to be turned on, since it can't be seen. The objects in the foreground are turned on, but only the parts that are visible to your character (your monitor's view). All of those pixels on the screen (using the referenced cardboard cutouts) are then looked at by the engine one pixel at a time, just one, and it never loads the mesh or the texture into memory. The engine is smart enough to determine each pixel's color without ever loading them. It references (like Google) the pixel it's looking for, combines just those samples in its rendering eye, produces the color, and the pixel is displayed on your screen where that object would be.

More simply, it's like this: imagine you had a dragon model, and on every portion of the dragon you wrote A, B, C, D, E, F, G, H, I, J, K ... until you had covered all the sides that can be seen with reference names. Then on each of the 16 possible angles you had further divisions (A1 through A8, etc.). Now the dragon is on your screen and you are looking at it, but it's facing you, so only A, D, E, and G would be visible, let's say. The engine does a Google-style search for all the indexed names we'd be looking at (those facing the camera) and turns them on. Then it determines which ones are behind the others (fully occluded) and turns them back off. Whatever is left is all the screen can see. No other object in the world behind the dragon, the mountains, the trees, none of it, can be seen, so they are all turned off where the dragon would be, because the dragon's pixels overlap those parts of the mountains and trees... So now that the screen is fully determined to contain A1-A7, D2-D6, E5-E6, G1, G3, and G6 (all only a few parts of the entire screen), it can begin looking at the pixels. It matches the address of the mesh with the relevant addresses of the textures and combines them as a rendering engine would, to produce the pixel at whatever color it should have. It uses this Google-style referencing system (which is very fast, much faster than loading models and textures into memory, manipulating them endlessly, and then trying to scale down the data on screen) to search for ONLY THE PIXELS IT NEEDS, once per frame, to obtain just what is seen and nothing more, leaving all other data on the disc untouched if not needed. Thus, the entire game runs off your hard drive with very little need for the GPU ...

That's what I gather is the secret of this tech from all that I've read and studied about it ...

Ok, before I begin, I have zero knowledge of any of this kind of stuff.

But what you have said here, in my limited understanding, is that when said dragon is in front of you, it fills the entire screen? How would that work? No other objects behind the dragon can be seen? This makes no sense to me, unless you mean directly behind, and not to either side (but still behind, within the FOV)? As we have it now, everything in your FOV is rendered, right? So when the dragon moves, everything behind it is already there and doesn't need extra processing? What you are saying is that when the dragon does move, this system then has to access, on the spot, the 'name' of every object behind it and then render all of those objects?

Again, as I understand it now, with Skyrim, when a dragon lands in front of you, that dragon is rendered separately from everything around it? And only through some form of collision detection does it actually 'interact' with its surroundings.

Any of this correct? Please enlighten me.
User avatar
Josh Dagreat
 
Posts: 3438
Joined: Fri Oct 19, 2007 3:07 am

Post » Mon May 14, 2012 3:12 pm

This new system doesn't lag with billions of points because it only ever renders at the display resolution. So a 1920 x 1080 screen would only render 2,073,600 points per frame.
User avatar
Laura Cartwright
 
Posts: 3483
Joined: Mon Sep 25, 2006 6:12 pm

Post » Mon May 14, 2012 6:50 am

All Bethesda needs to do is use TODAY'S technology. The first game to use the Gamebryo engine was in 2003. You can't tell me (us) there isn't a newer, better engine out there.

Uldred
User avatar
Isaiah Burdeau
 
Posts: 3431
Joined: Mon Nov 26, 2007 9:58 am

Post » Mon May 14, 2012 4:40 pm

I must agree, this looks very promising.
User avatar
Sarah Kim
 
Posts: 3407
Joined: Tue Aug 29, 2006 2:24 pm

Post » Mon May 14, 2012 2:57 pm

I'm an Environmental Artist, and truthfully I don't see how this could be possible. Yes, as technology moves on, vastly more detail can be added to games (for example, the amount of detail you see in games today is nothing compared to what we can create in 3ds Max or Maya), but "unlimited detail" on even the most ancient of hardware? I cannot believe that without hard evidence; it simply defies the logic of the technology we know.

If this new tech is valid (we have had "fake" advertisements like this in the past), then I suppose I'll need to learn to use something else. Arghh.
User avatar
leni
 
Posts: 3461
Joined: Tue Jul 17, 2007 3:58 pm

Post » Mon May 14, 2012 6:58 am

For me, the graphics are secondary. Give me decent physics, decent game mechanics, detailed characters and conversations, and a detailed game world. That's immersion.

Graphics generally detract from the storage space available to the above.
User avatar
Adam
 
Posts: 3446
Joined: Sat Jun 02, 2007 2:56 pm

Post » Mon May 14, 2012 7:02 am

Pretty sure all this thing can do is render detailed environments. Impressive, but not ideal for gaming. From what I've heard it's extremely limited when it comes to physics, animation, etc.
User avatar
Andres Lechuga
 
Posts: 3406
Joined: Sun Aug 12, 2007 8:47 pm

Post » Mon May 14, 2012 7:56 am

I'm an Environmental Artist, and truthfully I don't see how this could be possible. Yes, as technology moves on, vastly more detail can be added to games (for example, the amount of detail you see in games today is nothing compared to what we can create in 3ds Max or Maya), but "unlimited detail" on even the most ancient of hardware? I cannot believe that without hard evidence; it simply defies the logic of the technology we know.

If this new tech is valid (we have had "fake" advertisements like this in the past), then I suppose I'll need to learn to use something else. Arghh.

Yet it's possible that 2006 hardware that ran Oblivion will still run Skyrim today on consoles ... it's all about finding new ways to optimise things ...
User avatar
Jessie
 
Posts: 3343
Joined: Sat Oct 14, 2006 2:54 am

Post » Mon May 14, 2012 3:08 am

Yet it's possible that 2006 hardware that ran Oblivion will still run Skyrim today on consoles ... it's all about finding new ways to optimise things ...

They are talking about a much larger gap here: with their claimed low CPU usage and, at the moment, no GPU usage, you can apparently go back to the days of the Pentium 4 and get perfect graphics. As for your console comment, consoles are optimized because they have one sole function, to play your games, and as their lifespan comes to a close, little tricks are used to get more graphical power out of them: we turn down your FOV, we limit your FPS, low-res textures are used for distant objects, etc.

Developers are also limited by console hardware; we HAVE to make it work on those consoles with their outdated Nvidia 7800-class cards. I feel sorry for the people whose titles are "Low-Poly Artist". You can only do so much with tech before you hit a brick wall; sacrifices have to be made to give you better graphics on old tech.
User avatar
koumba
 
Posts: 3394
Joined: Thu Mar 22, 2007 8:39 pm

Post » Mon May 14, 2012 10:40 am

If it's ray tracing into a sparse voxel octree, then it's nothing new or amazing. John Carmack, who is arguably one of the best graphics programmers on the planet, mentioned this possibly becoming the next big thing long before these guys started doing it. I mean, is anyone dumb enough to think all these leading game programmers are not smart enough to think of new ways of rendering besides the age-old polygon rasterization techniques? Someday we will not be using 20-year-old rasterization anymore; that's just a fact. So yeah, eventually something new, maybe this tech, will happen.

John Carmack has no credibility. He sold it all to M$ and $ony.

He is no longer a respectable authority on video games or 3D tech.
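Credibility spat aside, "ray tracing into a sparse voxel octree", as named in the quote, is a concrete technique: cast a ray through a mostly empty volume and return the first occupied voxel. A heavily simplified sketch of the idea (uniform ray marching through a sparse voxel set, rather than true hierarchical octree descent; all names are invented):

```python
def voxel_key(p, depth):
    """Quantize a point in the unit cube to integer voxel coordinates
    at the given tree depth (2**depth voxels per axis)."""
    n = 1 << depth
    return tuple(min(int(c * n), n - 1) for c in p)

def raycast(occupied, origin, direction, depth=4, step=0.01, max_t=2.0):
    """March a ray through the unit cube and return the first occupied
    voxel hit, or None. A real SVO traversal would skip empty subtrees
    node by node instead of stepping, but the principle is the same:
    only voxels along the ray are ever touched."""
    t = 0.0
    while t < max_t:
        p = tuple(o + t * d for o, d in zip(origin, direction))
        if all(0.0 <= c < 1.0 for c in p):
            key = voxel_key(p, depth)
            if key in occupied:
                return key
        t += step
    return None

# One filled voxel in the middle of a sparse 16^3 grid:
occupied = {(8, 8, 8)}
hit = raycast(occupied, origin=(0.0, 0.53, 0.53), direction=(1.0, 0.0, 0.0))
# hit -> (8, 8, 8)
```

Note how the cost scales with the ray length and screen resolution, not with the total number of voxels in the scene, which is the property the "unlimited detail" pitch leans on.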
User avatar
Life long Observer
 
Posts: 3476
Joined: Fri Sep 08, 2006 7:07 pm

Post » Mon May 14, 2012 8:40 am

All Bethesda needs to do is use TODAY'S technology. The first game to use the Gamebryo engine was in 2003. You can't tell me (us) there isn't a newer, better engine out there.

Uldred

Today's technology doesn't work on the 360, though. Just look at CDPR. It's almost a year later and they still can't get The Witcher 2 to run on a 360, even though it's playable in 1080p on PC hardware from 4 years ago.

And no, Unreal Engine 3 is not "today's technology", nor is id Tech 5. Neither is whatever outdated trash Call of Duty uses.

And no in-house engine used for an exclusive counts as modern technology. Sorry, Uncharted (3).
User avatar
Charlotte Henderson
 
Posts: 3337
Joined: Wed Oct 11, 2006 12:37 pm

Post » Mon May 14, 2012 1:03 pm

The first game to use the Gamebryo engine was in 2003. You can't tell me (us) there isn't a newer, better engine out there.

Uldred

Wrong, since 2001.
User avatar
Guinevere Wood
 
Posts: 3368
Joined: Mon Dec 04, 2006 3:06 pm

Post » Mon May 14, 2012 11:47 am

Wrong, since 2001.

Thanks for the correction (I had only done a quick search), which actually just makes it even worse. :-)

Well, they could stop making games designed for the 360, but that won't happen. Early word on the next-gen Xbox isn't confidence-inducing when it comes to pushing the technology much either. The majority of PC games will be forever held back by them. Thank you Microsoft.

Uldred
User avatar
Rhi Edwards
 
Posts: 3453
Joined: Fri Jul 28, 2006 1:42 am

Post » Mon May 14, 2012 3:29 pm

If it's ray tracing into a sparse voxel octree, then it's nothing new or amazing. John Carmack, who is arguably one of the best graphics programmers on the planet, mentioned this possibly becoming the next big thing long before these guys started doing it. I mean, is anyone dumb enough to think all these leading game programmers are not smart enough to think of new ways of rendering besides the age-old polygon rasterization techniques? Someday we will not be using 20-year-old rasterization anymore; that's just a fact. So yeah, eventually something new, maybe this tech, will happen.
I can't judge the validity of this engine, but regarding this point: they are hardly "dumb enough". But they are part of the industry, of specific companies that want to make a profit. Trying something revolutionary is risky, especially on this scale, so it's unlikely to be funded when the alternative is to create a game with "standard" technology and make loads of money anyway. It's much easier for an outsider to dare to make something completely new.
User avatar
luke trodden
 
Posts: 3445
Joined: Sun Jun 24, 2007 12:48 am
