Wednesday, August 11, 2010
You know who you are... and it is your fault :P...
#define VEC3toSIMD(v1,v2) v2.fields.x=(v1).x; v2.fields.y=(v1).y; v2.fields.z=(v1).z;
#define VEC4toSIMD(v1,v2) v2.fields.x=(v1).x; v2.fields.y=(v1).y; v2.fields.z=(v1).z; v2.fields.w=(v1).w;
#define COPYVEC3(v1,v2) (v2)->x=(v1)->x; (v2)->y=(v1)->y; (v2)->z=(v1)->z;
#define COPYVEC4(v1,v2) (v2)->x=(v1)->x; (v2)->y=(v1)->y; (v2)->z=(v1)->z; (v2)->w=(v1)->w;
#define PRINT_VEC(v) printf("\n(vector) x= %f, y= %f, z = %f\n", (v).x, (v).y, (v).z);
#define PRINT_SVEC(str,v) printf("\n%s x= %f, y= %f, z = %f\n", str, (v).x, (v).y, (v).z);
#define PRINT_VEC2(v) printf("\n(vector) x= %f, y= %f\n", (v).x, (v).y);
#define PRINT_SVEC2(str,v) printf("\n%s x= %f, y= %f\n", str, (v).x, (v).y);
#define PRINT_VEC4(v) printf("\n(vector) x= %f, y= %f, z = %f, w = %f\n", (v).x, (v).y, (v).z, (v).w);
#define PRINT_SVEC4(str,v) printf("\n%s x= %f, y= %f, z = %f, w = %f\n", str, (v).x, (v).y, (v).z, (v).w);
#define PRINT_BULLETVEC(str,v) printf("\n%s x= %f, y= %f, z = %f, w = %f\n", str, (v).x(), (v).y(), (v).z(), (v).w());
#define MUL_FVEC3(t,v) (v)->x *= t; (v)->y *=t; (v)->z *= t;
#define MUL_FVEC4(t,v) (v)->x *= t; (v)->y *=t; (v)->z *= t; (v)->w *= t;
#define ADD_FVEC3(t,v) (v)->x += t; (v)->y +=t; (v)->z += t;
#define ADD_FVEC4(t,v) (v)->x += t; (v)->y +=t; (v)->z += t; (v)->w += t;
#define SUB_FVEC3(t,v) (v)->x -= t; (v)->y -=t; (v)->z -= t;
#define SUB_FVEC4(t,v) (v)->x -= t; (v)->y -=t; (v)->z -= t; (v)->w -= t;
#define INIT_OBJ_ARR_NULL(arr, len) {for(int i=0; i<len; i++) {arr[i] = NULL;}}
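For reference, here is a quick usage sketch. The struct layouts below are just assumptions made up to match what the macros expect (a plain x/y/z vector and a SIMD-style wrapper exposing a .fields member), and the whole thing presumes the macros above are in scope (e.g., pulled in from a shared header).
#include <stdio.h>

/* Hypothetical struct layouts matching what the macros above expect. */
typedef struct { float x, y, z; } vec3;
typedef struct { struct { float x, y, z, w; } fields; } simd_vec;

int main(void)
{
    vec3 a = { 1.0f, 2.0f, 3.0f };
    vec3 b;
    simd_vec s = { { 0.0f, 0.0f, 0.0f, 0.0f } };

    COPYVEC3(&a, &b);              /* member-wise copy through pointers */
    MUL_FVEC3(2.0f, &b);           /* scale b in place */
    VEC3toSIMD(a, s);              /* load a into the SIMD-style wrapper */
    PRINT_SVEC("b =", b);          /* prints b's x, y, z */
    PRINT_SVEC4("s =", s.fields);  /* prints the wrapper's x, y, z, w */

    return 0;
}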
Monday, August 09, 2010
You know pointers give you too much rope to hang yourself with when...
... you see this in your console's log:
malloc: *** error for object 0xd663fb0: incorrect checksum for freed object - object was probably modified after being freed.
*** set a breakpoint in malloc_error_break to debug
That's neat ;).
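For context, that message usually means something kept writing through a pointer after the memory behind it had already been handed back to the allocator, corrupting the bookkeeping data malloc stores in freed blocks. A minimal sketch of that kind of bug (names made up for illustration):
#include <stdlib.h>
#include <string.h>

int main(void)
{
    char *buffer = malloc(64);
    if (buffer == NULL)
        return 1;

    strcpy(buffer, "still alive");
    free(buffer);                      /* block goes back to the allocator */
    strcpy(buffer, "dangling write");  /* BUG: write through a dangling pointer */

    char *other = malloc(64);          /* the allocator may now notice the damage */
    free(other);
    return 0;
}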
Monday, August 02, 2010
LLVM news... Xcode finally leaving Eclipse in the dust ;).
I am watching a presentation on Xcode 4, and it got me thinking about the revolution that the LLVM project keeps bringing to the table as it matures. It's not the C++ support or the other features per se, but how integrating it with the IDE brings a revolution to compiled languages like Objective-C, C, and C++... kind of levelling the playing field with interpreted and bytecode-based languages (like Java) as well as the languages supported by the .NET runtime.
If you have ever used Eclipse to develop Java applications, you are accustomed to how much the IDE knows about your project and how it can help you fix your mistakes and double-check what you type in real time. This is the next step beyond plain code completion. With Java you do not run a big BUILD phase after writing your code: the bytecode is produced when you save each file, and the Java compiler working alongside the IDE can tap into the very information-rich soup the Java bytecode provides for your whole application at any given time.
In short, a VM-based language (like Java or the .NET ones) not only gets you closer to the "write once, run anywhere" mantra and gives you lots of info when the app runs on the target platform, it helps you during development as well... you get direct, real-time feedback after every change you make to the application's source code, and on the implications a change in one file can have on other files... before you ever debug it on a simulator or a device... and we freed ourselves from header files :P (well, going from C to Java, you will appreciate not having to declare a function before implementing and using it, without fear of "oops, I need to use function f2 inside function f1... how do I do that?" situations). Ok... ok... there are good things about header files :). The disadvantage of those kinds of languages is the very VM that helps us elsewhere: it is what can keep managed languages behind purely compiled languages in terms of efficiency and speed. The more you do at runtime in addition to the work your app is trying to do, the less you can do in your own app, as CPU time is a fixed quantity.
In OSS implementations, Java has already gained the ability to compile code ahead of time (AOT), and the same can be done with managed languages running on the .NET platform, effectively giving developers better and smarter tools that can assist their development work (which should lead to fewer bugs in their source code). The exciting thing is doing the opposite: bringing the features of VM-based languages to languages designed to be statically compiled (as opposed to dynamically compiled... JIT is a form of dynamic, "just in time" compilation).
Inserting the LLVM block in the middle of the road between source code and binary form is what makes this "best of both worlds" proposition possible. Instead of compiling your code directly to a binary form, a form which can lose a lot of the information that would help you find and fix bugs in your program, your code is analyzed, decomposed, and adapted into a form designed to run on a simpler, low-level, universal target (the virtual machine). Here the compiler carries over as much information as possible about the original file and what it tries to achieve, while also knowing the intricate details of the virtual machine our "intermediate" representation of the code was "compiled" for. In theory, if you developed a C++ runtime "for Fedora Core 12 as released on launch day" you might build a very similar solution without the help of any VM, but chances are you do not want to spend that much effort every time a piece of the OS changes, and you might want to run your program on several systems... maybe just by compiling it again, without porting the complex and tightly system-coupled runtime to every different mix of kernel, system libraries, etc. you find. Don't get me wrong, the C++ runtime is not exactly platform agnostic now (the libc you got on your system was compiled to run on it, for example), but it is a much more lightweight approach than what Java requires, for example.
When you take advantage of LLVM in one of the supported languages, you allow the compiler to do much richer optimization and error-checking work, because it has a lot of information not only on each single file, but on the program as a whole and on the source code that created it (as you can clearly see by using Clang's static analyzer integrated into Xcode 3.x).
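To make that concrete, here is a minimal sketch (the function is made up) of the kind of path-sensitive defect such an analyzer can report without ever running the program: when the early return is taken, the buffer allocated by malloc() is never freed.
#include <stdlib.h>
#include <string.h>

char *copy_label(const char *label)
{
    char *buf = malloc(strlen(label) + 1);
    if (buf == NULL)
        return NULL;
    if (label[0] == '\0')
        return NULL;   /* leak: 'buf' is still allocated on this path */
    strcpy(buf, label);
    return buf;        /* the caller is expected to free() this */
}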
Example of one thing that changed in Objective-C thanks to the use of LLVM and the modern Objective-C runtime:
Instead of adding an ivar in your class declaration, then declaring the property for it, then synthesizing it in the implementation file... you can simply declare a property and the other two steps will be done for you (@synthesize by default).
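A minimal sketch of the difference (the class and property names are made up, and the second class relies on the default-synthesis behaviour described above):
#import <Foundation/Foundation.h>

// The "old" three-step routine: ivar + @property + @synthesize.
@interface OldStylePlayer : NSObject {
    NSString *name;                             // 1) declare the ivar
}
@property (nonatomic, copy) NSString *name;     // 2) declare the property
@end

@implementation OldStylePlayer
@synthesize name;                               // 3) synthesize the accessors
@end

// With the modern runtime and the LLVM compiler, the property alone is enough:
// the backing ivar and the accessors are generated as if @synthesize were there.
@interface Player : NSObject
@property (nonatomic, copy) NSString *name;
@end

@implementation Player
@end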
Now, imagine the IDE having full access to that information by using LLVM technology for all its code completion, code checking, code editing, etc. needs, and what such an integration can yield. Xcode 4's Fix-it is one of those features. I had basically given up on having smart code completion and troubleshooting tools when not dealing with Java... seeing Xcode 4's code highlighting warn me of an error, with a pop-over giving details about it and a solution or a workaround, was something that made my jaw drop :D!
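As an example of the kind of slip that gets caught this way, take the classic assignment-inside-a-condition mistake; the compiler warns about it and can suggest the fix right there in the editor (the snippet itself is made up):
#include <stdio.h>

int main(void)
{
    int count = 3;
    if (count = 0) {    // flagged: assignment used as a condition; the suggested
                        // fix is 'count == 0' (or extra parentheses if intended)
        printf("empty\n");
    }
    return 0;
}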
I will stop now, before the huge list of inaccurate comments and technical explanations I have probably built up so far gets any longer, and I cannot wait to be able to write more without hitting NDA issues (everything I have talked about here is publicly available information).
Good afternoon everyone :)!