kikz

Render using atoms instead of polygons, in games


Hey, I just got pointed to this at work: Euclideon's 'atom' rendering technology and research.

 

Sounds pretty brilliant, but is it just vapourware/hoax/too ambitious?

 

They're claiming they can take any polygon-based game and convert it to the atom style, with one cubic millimetre containing 15 'atoms', resulting in astounding detail.

Will this be something that current GPU technology can handle? How does it affect DirectX?

 


Hmmmm.......

kikz, you're a developer, so you'd have a good idea what I'm talking about in terms of system limitations.

 

There used to be an old (OLD) system of rendering everything from balls, but it looked shit. Mind you, there were maybe 20 balls per model of a person...

 

What they are talking about here is rendering using atoms. That's done in science, but there the problem is that once a calculation involves more than a few hundred thousand atoms, your Linux supercomputer starts to chug (depending on the calculation, of course). And that's probably pretty trivial compared to what they are doing.

 

For this to work, they need to be working some serious voodoo.

 

They'll choke out RAM on modern machines, which means their algorithms for pulling the data you'll want to look at next into RAM and onto the GPU need to be crazy good. If you go from looking at eye level to examining dirt in a fraction of a second, it needs to unload the data on all the objects you were looking at (each made up of millions of 'atoms') and load the data on the grains of dirt, leaves etc. (made up of millions of atoms?).
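To make that streaming problem concrete, here's a minimal Python sketch. It's purely illustrative: the class and names are hypothetical and have nothing to do with Euclideon's actual code. The point is just that RAM can only ever hold the chunks of point data near the camera, and everything else has to be evicted and re-read on demand:

```python
from collections import OrderedDict

class NodeCache:
    """Toy LRU cache for chunks of point ('atom') data. RAM holds only
    the chunks the camera can currently see; everything else must be
    evicted and re-read from disk on demand."""

    def __init__(self, capacity_chunks):
        self.capacity = capacity_chunks
        self.chunks = OrderedDict()            # chunk_id -> point data

    def get(self, chunk_id, load_from_disk):
        if chunk_id in self.chunks:
            self.chunks.move_to_end(chunk_id)  # mark as recently used
            return self.chunks[chunk_id]
        data = load_from_disk(chunk_id)        # slow path: a disk read
        self.chunks[chunk_id] = data
        if len(self.chunks) > self.capacity:
            self.chunks.popitem(last=False)    # evict least recently used
        return data

# Snapping from eye level to a patch of dirt changes the visible chunk
# set almost completely between two frames -- every miss here is a disk
# read that has to finish inside the frame budget.
cache = NodeCache(capacity_chunks=4096)
dirt = cache.get("dirt_chunk_0", load_from_disk=lambda cid: [cid])
```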

 

That's pretty hefty to start with. Then there are other problems: their demo doesn't show dynamic lighting or particle effects, which to me is a big worry.

It's actually pretty easy (as in, not FREAKING IMPOSSIBLE) to do what they display if the light mapping doesn't need to change.

 

My guess is they use some pretty hefty post-production algorithms to chomp the "actual atoms" down into "effective atoms".
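One plausible reading of that "chomping down" (speculative; nothing to do with their actual pipeline) is snapping raw scanned points to a coarser grid and keeping one averaged point per cell. A toy version:

```python
import numpy as np

def downsample(points, colors, cell_size):
    """Collapse raw points into one 'effective atom' per grid cell by
    averaging position and colour. A guess at the idea, not Euclideon's
    actual method."""
    keys = np.floor(points / cell_size).astype(np.int64)
    sums = {}  # cell -> [position sum, colour sum, count]
    for k, p, c in zip(map(tuple, keys), points, colors):
        s = sums.setdefault(k, [np.zeros(3), np.zeros(3), 0])
        s[0] += p
        s[1] += c
        s[2] += 1
    out_p = np.array([s[0] / s[2] for s in sums.values()])
    out_c = np.array([s[1] / s[2] for s in sums.values()])
    return out_p, out_c

points = np.random.rand(100000, 3)   # fake scan: 100k points in a unit cube
colors = np.random.rand(100000, 3)
p2, c2 = downsample(points, colors, cell_size=0.05)
print(len(points), "->", len(p2), "effective atoms")
```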

My guess is also that dynamic lighting and particle effects are not only "not included" but will be "really hard to implement" in this system.

The fact that the terrain runs at 20fps in software mode (presumably on a supercomputer) is also a worry. It tends (to me) to suggest this is in the realm of ray tracing rather than next-gen graphics.

 

The video was so light on detail, in terms of their actual achievements, that it's crazy.

 

TBH, I think this is either a severely limited system or a total hoax. What they are suggesting is akin to compression that allows HD video over dial-up.

Their basic premise is that their system is SOOO efficient at rendering that they can afford to build everything from atoms, producing an effective polygon count hundreds of thousands of times denser than current systems (rough numbers below).

 

That means their compression of this data has to be a hundred thousand times better.

The speed to render, fill, and post-process has to be a hundred thousand times better.

Their AA algorithms have to be a hundred thousand times better.

Their data preloading has to be a hundred thousand times better.
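To put rough numbers on that, take the thread's own figures at face value: 15 atoms per cubic millimetre, and assume (generously, as an illustration) 4 bytes per atom after compression:

```python
# Back-of-envelope only; both constants are assumptions from the thread.
atoms_per_mm3 = 15
bytes_per_atom = 4                     # generous: position + colour, compressed

island_m3 = 1000 * 1000 * 30           # a 1 km x 1 km x 30 m game island
mm3_per_m3 = 1000 ** 3
raw_bytes = island_m3 * mm3_per_m3 * atoms_per_mm3 * bytes_per_atom

print(f"{raw_bytes / 1e18:.1f} exabytes")   # ~1.8 exabytes of raw data
```

Even if instancing and sparsity cut that by a factor of a million, you're still left with terabytes.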

 

Let's think of a good example.

 

If I told you I had so much fucking gold that I just used it unsparingly, and had a solid gold set of golf clubs, a solid gold golf buggy, and a solid gold house, what would your question be?

Not "how did you get the golf buggy to work?", it would be "where did you get the gold from?!"

 

Likewise, the question we ask with this video isn't "do atoms work as a replacement for the polygon?"... they could, compute limitations aside.

The question is "how do you get a hundred-thousand-fold increase in your polygon pushing?"

 

After all, AFAIK polygons are nearly hard-coded into current cards. They deal with vertices etc., and unless they are running as GPGPUs to generate this data, the atoms themselves end up rendered in the pipeline as polygons.

 

Short answer: I'd love it to work, but it's (IMHO) too good to be true.

If this does get released, I'll still want to know how they did it.

This can't be magic; it will be a magic trick, and I'll always want to know what's behind the curtain.

 

What we don't know, is whether the magic trick only works on YouTube, or whether it works interactively when you play :)

Edited by TinBane


This looks really good, but I really don't see the point of using atoms (tiny balls) for models, environments and such. Yes, it would be awesome for the level of detail possible, but it wouldn't work well on modern machines.


Thanks TinBane. My first thought was that the video was too light on detail (and that perhaps this company had been mentioned on Atomic before and I missed it). My second thought was about the processing power required: it would put an enormous strain on our CPUs and wouldn't suit the architecture of current GPUs.

 

I'll be the first to admit I don't keep up to date with graphics rendering technologies, being a distributed computing 'specialist' :)

 

Will be an interesting one to revisit in 12 months, though.



Yeah, definitely.

 

In 12 months, they might totally prove me wrong.

But at the moment, it's a stretch to see how it's even possible.


It's the same software.

 

Interesting to see some more concrete claims in there.

 

It looks like it's treating the scene more like a database, rendering a given perspective by searching for the points that match each pixel from that POV.

 

There are problems with that:

The RAM needed to run the search process.

The fact the search has to be done just in time.

The fact that when you play a game, you expect not just the POV but also items in the environment to move.

 

Moving items other than the POV would, in that model, mean massive changes to the database.

And massive changes to the database eliminate the possibility of using a large number of database "cheats" to speed it up.
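As a toy illustration of the "one search per pixel" idea (a guess at the framing, not Euclideon's actual algorithm), here's a ray marched through a voxel grid to find the first occupied cell for a single pixel:

```python
import numpy as np

def search_pixel(volume, origin, direction, step=0.5, max_t=500.0):
    """March a ray through a voxel grid and return the first occupied
    cell -- one such 'search' per screen pixel. A toy stand-in for the
    database-lookup framing above."""
    t = 0.0
    while t < max_t:
        p = origin + t * direction
        i, j, k = p.astype(int)        # grid coords (assumes positive space)
        if (0 <= i < volume.shape[0] and 0 <= j < volume.shape[1]
                and 0 <= k < volume.shape[2]) and volume[i, j, k]:
            return (i, j, k)           # first 'atom' this pixel can see
        t += step
    return None                        # ray hit nothing: sky

# A static scene is just a read-only lookup structure, which is why the
# demo can be fast; moving objects would mean rewriting this 'database'
# every frame, which is the objection above.
volume = np.zeros((64, 64, 64), dtype=bool)
volume[:, :8, :] = True               # a floor slab
hit = search_pixel(volume, origin=np.array([32.0, 40.0, 32.0]),
                   direction=np.array([0.0, -1.0, 0.0]))
print(hit)
```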

Edited by TinBane


Their AA algorithms have to be a hundred thousand times better.

Technologies like MLAA and FXAA may well make this moot. They have the potential to replace FSAA due to their speed and minimal resource usage. FXAA is basically as good as 4x FSAA as it is.


True, that depends though.

MLAA and FXAA are post-render Anti-Aliasing.

 

EDIT: "Effectively" post-render Anti-Aliasing. I don't mean they occur necessarily outside the render pipeline, but that they are functions that apply to the image while it's in buffer. They aren't sub-pixel based processes that occur during the initial image render (AFAIK). Hence their lack of overhead.

 

I guess if post-render is all that's needed, then this won't increase the "cost" in time and processing power to anti-alias the image (it will just be a factor of resolution).

 

However, for true anti-aliasing, where pixels on borders rely on subpixel information, my point is correct. One of the key reasons less rigorous AA regimes are used is that current games don't support the number of polygons that would require them. If this technology works the way they say it does, they will need subpixel AA to prevent moiré effects wherever pixel-level detail approaches screen resolution, in things like the ground or foliage.
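The distinction in toy form: true subpixel AA needs information rendered below pixel resolution. The crudest version, brute-force supersampling, gets it by rendering at 4x and averaging down, which a post-process filter like FXAA, only ever seeing the finished 1x frame, cannot do. The `render` callback here is a hypothetical stand-in for any renderer:

```python
import numpy as np

def supersample_aa(render, width, height, factor=4):
    """Crudest true subpixel AA: render at factor x resolution, then
    box-filter down. The averaging uses subpixel coverage that a
    post-process filter on the 1x frame never sees."""
    hi = render(width * factor, height * factor)   # subpixel detail
    hi = hi.reshape(height, factor, width, factor)
    return hi.mean(axis=(1, 3))                    # average each block

# A thin near-horizontal line: at 1x it aliases into stair-steps (and,
# tiled, into moire patterns); the 4x average recovers fractional coverage.
def render(w, h):
    img = np.zeros((h, w))
    for x in range(w):
        img[int(x * 0.31) % h, x] = 1.0
    return img

print(supersample_aa(render, 64, 64).max())
```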

 

http://en.wikipedia.org/wiki/Moir%C3%A9_pattern

 

Given the detail, tricks like mipmapping and anisotropic filtering, which apply to textures, won't work here unless similar methods are adopted to work on the geometric detail...

Which means subpixel filtering/anti-aliasing is required.

 

Anyway, it's entirely speculative. And whether AA requires a massive increase in processing power is kind of moot, given that everything else they claim does require an increase in resources.

Edited by TinBane


Sounds like a lot of BS. I wonder how the YouTube video I linked compares with the technology of 12 months ago? I get the feeling this is Mr Dell trying to drum up excitement (and money) for something that doesn't exist. Maybe he's looking at selling his IP/company?

 

Still, RAM is cheap. If it could produce that sort of GFX and needs 32GB or 64GB of RAM, that's still cheaper than a decent video card :)

 

Also, I should have checked "my" own website first lol http://www.ausgamers.com/news/read/3093969

 

The company has just (in May) received $2M of funding. Seems IRC had the details going around before the article went up.

Edited by kikz


Could the tech be useful for more than just games? I'm thinking that you could combine this with procedural generation to automatically produce an ultra-detailed, realistic (or not) environment for the purpose of 3D animation. The artist can leave minor details to the computer while they concentrate on the parts of a scene that are relevant to the particular story.
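For what it's worth, the procedural side of that is well-trodden. A toy fractal value-noise heightfield (illustrative only; real tools use Perlin/simplex noise and smooth interpolation) shows how minor detail can be generated rather than authored:

```python
import numpy as np

def value_noise_height(size, octaves=5, seed=0):
    """Toy fractal value noise: the computer fills in terrain detail
    procedurally, so an artist only authors what matters to the story."""
    rng = np.random.default_rng(seed)
    height = np.zeros((size, size))
    for o in range(octaves):
        n = 2 ** o + 1                      # coarse grid for this octave
        coarse = rng.random((n, n))
        # Upsample to full resolution (nearest-neighbour kept simple here;
        # real noise would interpolate smoothly between grid values).
        idx = np.arange(size) * (n - 1) // (size - 1)
        layer = coarse[np.ix_(idx, idx)]
        height += layer / 2 ** o            # finer octaves contribute less
    return height / height.max()

terrain = value_noise_height(256)
print(terrain.shape, terrain.min(), terrain.max())
```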


Could the tech be useful for more than just games? I'm thinking that you could combine this with procedural generation to automatically produce an ultra-detailed, realistic (or not) environment for the purpose of 3D animation. The artist can leave minor details to the computer while they concentrate on the parts of a scene that are relevant to the particular story.

Yes, the video says it's currently being used in fields such as medical imaging, where higher detail is required (not their technology, but the idea ;))


Once refined and implemented properly, tessellation will be the next big thing. Even if this atom tech were real, just implementing it would mean reworking the way GPUs process data and the way developers write code... I kind of forgot what I was saying, sorry. Basically, even if this gets well developed it will be like (I'm using cars here) a rotary engine. In theory and design they're 100x better and more efficient than a standard reciprocating engine, but current engine designs have had billions of dollars and so much more development that it's just not realistic to switch to a completely new design. And in practice a rotary engine is a POS anyway; it's inefficient compared to current engine designs that are nearly fully developed.


It doesn't exist until they release a tech demo. You'd think that after all the time they spent making mock-ups for their marketing hype videos, they'd have some sort of tangible proof in the form of a distributable executable file.

 

Edit: They're at QLD 4172. Anyone want to track them down and get an interview? :P

Edited by .:Cyb3rGlitch:.


Yes, the video says it's currently being used in fields such as medical imaging, where higher detail is required (not their technology, but the idea ;))

 

I really didn't understand those references to medical imaging, etc.

Is it talking about how slices are turned into a 3D image?

It didn't make any sense to me at all.
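For what it's worth, the usual answer: scanners like CT produce a stack of 2D slices, and stacking them gives a voxel volume you can project through from any angle. A toy maximum-intensity projection, the simplest such render, with random data standing in for real slices:

```python
import numpy as np

# Stack 2D scan slices into a voxel volume, then project rays through it.
# Maximum-intensity projection (MIP) just takes the brightest voxel along
# each ray; it's a common first-pass render in CT/MRI viewers.
slices = [np.random.rand(128, 128) for _ in range(64)]   # fake CT slices
volume = np.stack(slices, axis=0)                        # (depth, y, x)

mip_front = volume.max(axis=0)    # look through the stack from the front
mip_side = volume.max(axis=2)     # same data, viewed from the side
print(mip_front.shape, mip_side.shape)
```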


Dunno TinBane, maybe the article on the home page of this site has more info :p lol. I think I made my post first!


Apparently they were doing an identical demo... 8 years ago:

 

http://imgur.com/g0gXt

Came here to post this.

 

However (and I see someone else posted it, which is good), Carmack thinks the tech is great; it's just that we're many years away from having the tools to implement it in any meaningful way for consumers right now.

 

But all their 'tech demos' are (aside from the awesome voice-over guy) missing some fairly critical things: physics, collision detection and lighting, for starters. Worth keeping in mind that it took a long time for textured polygons to get a foothold in the industry too...


I think it's great that there's a company out there willing to invest time, money and effort into doing something in an entirely new way. If we didn't have these types of innovative companies doing what they do, we would be a lot further behind.

 

Having said that, for every one innovative new technology that actually hits the market, there are about 1000 that fail.

 

Good luck and I hope they reach their goals. I don't think I'll be investing my own money in them, though...

