
Molehill API Details

I just finished watching the new video about the API design of Molehill on Adobe TV, hosted by Sebastian Marketsmueller.
I’d like to give you a summary here:
The first thing I have to say is that most developers won’t even want to “put their hands on Molehill”. Why? That’s quite simple: most developers want to display 3D content with ease and without having to read a bunch of books before they can use the technology. Molehill really requires more than advanced or professional programming skills – you’ll have to have coding in your veins! I for myself will really enjoy coding with Molehill – coding is just embedded in my DNA ;)

So what is this guy talking about?
There’s still a drawTriangles method in Molehill, but as you might have expected, it’s not as easy as the old one. As a matter of fact, only a few calculations still have to be done in AS3, like building trees if you like them. All calculations concerning transformation, rotation, scaling and so on are performed directly on the graphics hardware.
I thought they had built a big wrapper around everything, so you’d only have a few options, but fortunately my thoughts were wrong. You have nearly all the control over the graphics device that a C++ developer has. The object data won’t be kept in Flash’s memory, because everything is held in the graphics card’s RAM.
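
Just to make that concrete, here’s a minimal sketch of how I imagine uploading geometry and drawing it could look in AS3. The class and method names (Stage3D, Context3D, VertexBuffer3D, drawTriangles and friends) are my assumptions based on what the session showed, so don’t take them as the final API:

// minimal sketch -- class and method names are assumptions, not the final API
import flash.display.Stage3D;
import flash.display3D.*;
import flash.events.Event;

var stage3D:Stage3D = stage.stage3Ds[0];
stage3D.addEventListener(Event.CONTEXT3D_CREATE, onContext);
stage3D.requestContext3D();

function onContext(e:Event):void {
    var context:Context3D = stage3D.context3D;
    context.configureBackBuffer(800, 600, 2, true);

    // one triangle: x, y, z per vertex -- after the upload it lives in GPU memory
    var vertices:Vector.<Number> = Vector.<Number>([
        -1, -1, 0,
         1, -1, 0,
         0,  1, 0
    ]);
    var vBuffer:VertexBuffer3D = context.createVertexBuffer(3, 3);
    vBuffer.uploadFromVector(vertices, 0, 3);

    var indices:Vector.<uint> = Vector.<uint>([0, 1, 2]);
    var iBuffer:IndexBuffer3D = context.createIndexBuffer(3);
    iBuffer.uploadFromVector(indices, 0, 3);

    // the new drawTriangles works on buffers that already sit on the card
    // (a vertex/fragment program also has to be set -- more on that below)
    context.setVertexBufferAt(0, vBuffer, 0, Context3DVertexBufferFormat.FLOAT_3);
    context.clear();
    context.drawTriangles(iBuffer);
    context.present();
}
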
To actually work with the Molehill API, you’ll have to write assembler code. Yes, that’s what I wrote: assembler code. Some of you might have seen code like this before:

mov eax, ecx
jmp 0x475927

If you want to learn this, you’ll probably find that it’s not as difficult as it seems. The assembly you write for Molehill is compiled to bytecode, stored in a ByteArray and finally uploaded to the graphics card via the graphics pipeline.
The API contains only 20 op-codes that you should be familiar with. If you’ve done 3D mathematics on your own, they should be self-explanatory.
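
Here’s a small sketch of how I picture that round trip: write the op-codes as a string, assemble them into a ByteArray, upload. The AGALMiniAssembler helper and the register names (va0 for a vertex attribute, op for the output position, oc for the output colour, fc0 for a fragment constant) are assumptions on my part, not confirmed API:

// sketch only: assembling op-codes to bytecode and pushing them to the card
import flash.display3D.*;
import flash.utils.ByteArray;

var assembler:AGALMiniAssembler = new AGALMiniAssembler();   // assumed helper class

// vertex program: just pass the incoming vertex (va0) through to the output (op)
var vertexCode:ByteArray = assembler.assemble(Context3DProgramType.VERTEX,
    "mov op, va0");

// fragment program: output a constant colour taken from fc0
var fragmentCode:ByteArray = assembler.assemble(Context3DProgramType.FRAGMENT,
    "mov oc, fc0");

var program:Program3D = context.createProgram();   // context: the Context3D from the sketch above
program.upload(vertexCode, fragmentCode);          // the bytecode travels to the GPU here
context.setProgram(program);
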
What gets delivered to the graphics adapter is called a token stream. Each token inside that stream has a fixed size: 192 bits.
It starts with the op-code [32 bits], followed by the destination [32 bits], then SourceA and SourceB, each 64 bits. Are you still with me? If not, just forget about reading any further… :D
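
For the bit-counters among you, here’s how I picture one of those tokens being built up in a ByteArray. The field sizes are from the session; the actual encoding inside each field wasn’t covered, so the values below are just placeholders:

// one token = 32 + 32 + 64 + 64 bits = 192 bits (values are placeholders)
import flash.utils.ByteArray;

var opCode:uint = 0, destination:uint = 0;
var sourceALow:uint = 0, sourceAHigh:uint = 0;
var sourceBLow:uint = 0, sourceBHigh:uint = 0;

var token:ByteArray = new ByteArray();
token.writeUnsignedInt(opCode);        // op-code      [32 bits]
token.writeUnsignedInt(destination);   // destination  [32 bits]
token.writeUnsignedInt(sourceALow);    // SourceA      [64 bits, written as two uints]
token.writeUnsignedInt(sourceAHigh);
token.writeUnsignedInt(sourceBLow);    // SourceB      [64 bits, written as two uints]
token.writeUnsignedInt(sourceBHigh);

trace(token.length * 8);               // 192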

Do you like matrices? I hate them! However, most engines use them, and the API has native op-codes for matrix transformations: m44, m34 and m33, used for 4×4, 3×4 and 3×3 matrices respectively.
Let’s say we want to transform mtx2 by mtx1 and store the result in output. All we need to do is write one line of assembler code:

m44 output, mtx1, mtx2

Quite simple, eh?
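
Just for scale, here’s roughly the same operation done on the CPU side with flash.geom.Matrix3D, which is what that single m44 token saves you from doing in AS3. The exact m44 semantics weren’t spelled out in the session, so the mapping is my interpretation:

// CPU-side equivalent of that one line of assembler (my reading of it)
import flash.geom.Matrix3D;
import flash.geom.Vector3D;

var mtx1:Matrix3D = new Matrix3D();
mtx1.appendRotation(45, Vector3D.Z_AXIS);   // some arbitrary transform

var mtx2:Matrix3D = new Matrix3D();
mtx2.appendTranslation(0, 0, 100);          // another arbitrary transform

var output:Matrix3D = mtx2.clone();
output.append(mtx1);                        // output = mtx1 * mtx2
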
I keep thinking about all the possibilities of this API. I’ll have to do a lot of benchmarking once I have access to it. I don’t have an answer from Adobe yet, but even if the Alternativa people have a big head start, we will be able to get close to them, even if we have to wait for the beta. With the new API it’s sooo easy and fast to create an engine, I just can’t believe it :) You might think differently.

The way objects are held on the graphics card is totally different from that of every 3D engine I know, including noob3D. I know this hierarchy:
Object -> faceList -> Face -> vertices
so every face has its own vertices directly in that class. The graphics hardware just has a buffer with all vertices in it; if you want a particular vertex, you have to know its index position.
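
To make the difference concrete: in that scheme a quad is just four vertices in one flat buffer plus six indices pointing into it, so the two faces share vertices through their index positions instead of each owning its own copies. The buffer classes are again my assumptions:

// four shared vertices (x, y, z) in one flat buffer...
var vertices:Vector.<Number> = Vector.<Number>([
    -1, -1, 0,    // index 0
     1, -1, 0,    // index 1
     1,  1, 0,    // index 2
    -1,  1, 0     // index 3
]);

// ...and two faces that refer to them purely by index position
var indices:Vector.<uint> = Vector.<uint>([
    0, 1, 2,      // face 1
    0, 2, 3       // face 2 re-uses vertices 0 and 2 instead of owning copies
]);

var vBuffer:VertexBuffer3D = context.createVertexBuffer(4, 3);   // context as in the first sketch
vBuffer.uploadFromVector(vertices, 0, 4);
var iBuffer:IndexBuffer3D = context.createIndexBuffer(6);
iBuffer.uploadFromVector(indices, 0, 6);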

Those readers who haven’t closed the page yet may be interested in the session by Sebastian Marketsmueller:

Still wanna put your hands on it? Questions and discussion are welcome.

More information will be made public here once I’ve seen the other videos. They will also be available at bytearray.org. Maybe Thibault will write some more about this.