Prev: FAKE CONFERENCE 2nd call - INFORMATICS 2010: submissions until 31 May 2010
Next: Micro Men on BBC4 today 22:30
From: Skybuck Flying on 26 May 2010 18:47

"Nicolas Bonneel" <nbonneel(a)cs.ubc.ca> wrote in message news:htk745$3nm$1(a)swain.cs.ubc.ca...
> Skybuck Flying wrote:
>> "Nicolas Bonneel" <nbonneel(a)cs.ubc.ca> wrote in message
>>> I don't say "everything is solved". I just say there *are* efficient
>>> compression schemes - as in the links I sent, with octree-based
>>> compression. So maybe things still have to be done to reach the Shannon
>>> limit, but if people *are* able to render billions of voxels, it *means*
>>> it is sufficiently compressed for interesting use with current hardware.
>>> If it was *not* compressed, an 8192^3 voxelization would take 2 terabytes
>>> (with just 1 float opacity per voxel, without any color), which obviously
>>> does not fit in the GPU memory.
>>
>> Perhaps, but perhaps the renderer is also "cheating" by using the CPU to
>> do the decompression.
>
> Instead of saying "perhaps", read it!!!!!!

No time for it; besides, it contains way too many details.

> If the data was decompressed on the CPU, there would be no way to send
> that amount of data on the fly to the GPU.

It could do it partially, a little bit at a time, but that would not be very
useful. Actually, I think the documents mentioned "blocks" being decompressed
on the CPU and then fully sent to the GPU.

> They achieve at worst 20 fps on this old hardware.

20 fps is already very low, and won't be enjoyable for shooters.

> If you meant "compressed", then it is completely fine to compress data on
> the CPU.

I remain unconvinced... a demo would answer the question of how much CPU
utilization is actually needed.

There is hope for these kinds of renderers, at least on the CPU side, because
of multi-core. However, for now systems have only one memory, and therefore
memory bandwidth will be the bottleneck even on multi-core systems.

The proof is in the pudding! ;) :)

Bye,
Skybuck :)
From: Skybuck Flying on 26 May 2010 18:49

Oh, I forgot to mention another important problem with these shaders and
custom renderers: re-usability of code.

It will probably be very hard to re-use this code. An API would be much
easier to re-use.

In other words: shaders do not seem to be the most re-usable code, and CG in
particular seems to lack "units/objects" concepts. That could be a big
obstacle.

Bye,
Skybuck.