Wednesday, May 1, 2013

Spidermonkey Nightly [March 1st, 2013]: Windows 7, VS2012

I built and packaged the dependencies for playing with SpiderMonkey (I think it's IonMonkey at the moment) within VS2012 on Windows 7.

I'm not sure if it's a 32-bit or 64-bit build, but I basically built the Firefox nightly from source and then pulled out the SpiderMonkey pieces for use.

Hopefully it runs for you right away! Took me a while to get it right:
https://bitbucket.org/brownw1/spidermonkey_vs2012

Thursday, April 18, 2013

Why singletons are "Bad Patterns"

I made a point for my New Year's resolution of 2013 to use more design patterns. I wanted to start making more conscious and judicious design decisions and to stop "slinging code".

So I started using the singleton pattern for things like caches and input in my projects.

Of course, I then found out that these convenient things are "bad practice" and that I should stop using them. I didn't really get why they're bad, though, until I crawled around some Stack Overflow posts on the topic.

What it boils down to is...

  1. Singletons hide dependencies
  2. Singletons make it hard to test
  3. Singletons aren't automatically cleaned up and are thus liabilities

I really do enjoy the concept of keeping things scoped so that they can just get destructed and neatly go away.
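
For contrast, here's a minimal sketch of the pattern in question: a classic function-local-static singleton in C++ (the Cache name is just a hypothetical example). Notice that nothing in a caller's interface reveals that it uses the cache:

  // Any code anywhere can call Cache::instance() -- the dependency is hidden.
  class Cache {
  public:
      static Cache& instance() {
          static Cache theInstance;  // constructed on first use
          return theInstance;
      }
      void put(int key, int value) { /* ... */ }
  private:
      Cache() {}                       // private ctor: only instance() can create it
      Cache(const Cache&);             // non-copyable
      Cache& operator=(const Cache&);  // non-assignable
  };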

That being said, a decent "solution" to the "problem" seems to be to create your would-be singleton high up in the design hierarchy as a scoped object. Then, just have every object below it in the hierarchy that needs it receive a reference to it in its constructor.

Doing that, you have a clear dependency, a scoped object, and a well-defined initialization order.

Maybe you could even define a base class whose constructor takes references to these would-be singleton resources, like a cache manager or input manager. Then the dependency would just be a normal, visible part of your design.
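
Here's a minimal sketch of that idea in C++ (Cache and Game are hypothetical names): the cache lives as a scoped object near the top, everything below receives it by reference, and it gets destructed automatically when the scope ends:

  #include <string>
  #include <map>

  class Cache {
  public:
      void put(const std::string& key, const std::string& value) { data_[key] = value; }
  private:
      std::map<std::string, std::string> data_;
  };

  class Game {
  public:
      explicit Game(Cache& cache) : cache_(cache) {}  // the dependency is explicit
      void run() { cache_.put("greeting", "hello"); }
  private:
      Cache& cache_;  // non-owning reference; the Cache outlives the Game
  };

  int main() {
      Cache cache;       // created high up in the hierarchy, scoped
      Game game(cache);  // handed down at construction
      game.run();
  }                      // game, then cache, destruct in a defined order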

Monday, March 4, 2013

GLFX Build with VS2012

If you get a linker error about an _MSC_VER value of 1600 not matching 1700 when you use MSVC2012 to build a project with GLFX (1600 is the VS2010 compiler, 1700 is VS2012), it means you need to compile GLFX for yourself.

So, here is how I did it (like 1 minute ago).


  1. Grab the glfx source.
  2. Open the vs2010 project in vs2012.
  3. In the solution, right click on the bold project "glfx".
  4. Go to "References".
  5. Note the project referenced, "glew_static".
  6. You'll need to get the glew source and add its project to the glfx solution, then add it as a reference.
  7. So, remove the old glew_static reference from glfx's references.
  8. Add your own, "my_glu\build\vc10\glew_static.vcxproj".
  9. In glfx's references, add the glew_static project you just added to the solution.
  10. Your build should now work.
  11. Go to glfx/Debug (or Release, if that's what you built) and grab a copy of the .lib to use in your project.

Whamo, that should do it.

Wednesday, February 20, 2013

iOS Note: Panoramic Hotspots

This is just a note on some of the solutions I've implemented for features on a project we are doing at my job.

So we have a panoramic viewer on the iPad. There is an image mapped to a sphere and a camera in the center of the sphere. You can rotate around and zoom in on things.

The designers wanted the functionality to click on specifically designated areas to trigger events. For instance, touch this painting and you get info on the piece.

In the legacy code I inherited for the project (it was already roughed out in prototype), these triggers were detected via point-radius calculations: here is an XYZ position, here is its radius, do BLAH when a touch hits within that radius. This works fine to some level of accuracy, but they quickly found that they needed not only more accuracy but also a more flexible way for other designers on the team to author these areas without reaching into the compiled code.

So they asked me, and I said to move the problem from model space to image space. Drop the positions and distances and just work with the pixels since that's what you are interacting with anyway.

My solution was basically to modify the rendering pipeline a bit, and to add a message queue.

For the pipeline, set up framebuffer rendering so that you aren't only writing to the screen but also to an offscreen buffer. Note that this will be a framebuffer, not a renderbuffer. What do we draw to this offscreen buffer? We have the artists/designers copy the panoramic texture and paint the areas they would like to respond to touch in solid, flat colors (no gradients or whatnot). With 8 bits per RGB channel you get 256^3, around 16.7 million colors. That's over 16 million unique touch areas. Plenty.

Now you can just glReadPixels a single pixel at the touch position from the offscreen buffer to get the color they touched.
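
Here's a minimal sketch of that read-back, assuming a hypothetical pickFBO that holds the rendered collision map. OpenGL's origin is bottom-left, so the touch's y coordinate (which UIKit gives in top-left origin) gets flipped against the viewport height:

  #include <OpenGLES/ES2/gl.h>

  // Read the collision-map color under a touch; rgba receives 4 bytes.
  void readPickColor(GLuint pickFBO, GLint touchX, GLint touchY,
                     GLint viewportHeight, GLubyte rgba[4])
  {
      glBindFramebuffer(GL_FRAMEBUFFER, pickFBO);     // sample the offscreen buffer
      glReadPixels(touchX, viewportHeight - touchY - 1, 1, 1,
                   GL_RGBA, GL_UNSIGNED_BYTE, rgba);  // one pixel is enough
      // Rebind the framebuffer you normally draw the screen with afterwards.
  }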

What I do is, on a touch, push a "request" for a sample into queue A. Queue A is the "request mailbox", the renderer's inbox. The renderer checks this while it's submitting render calls, and if it sees any requests in its inbox it calls glReadPixels and puts the result into queue B, the "response mailbox", the renderer's outbox. The rest of the application checks this outbox regularly to see if anything is in it. It takes any mail from it, which will be an RGB color, and checks a table to see which event to trigger for that color. And that is it.
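
Here's a sketch of those two mailboxes, assuming the renderer runs on its own thread (all names here are mine, not from the project):

  #include <mutex>
  #include <queue>

  // One lockable mailbox; the app owns two (queue A: requests, queue B: responses).
  template <typename T>
  class Mailbox {
  public:
      void push(const T& item) {
          std::lock_guard<std::mutex> lock(mutex_);
          items_.push(item);
      }
      bool pop(T& out) {  // returns false when the box is empty
          std::lock_guard<std::mutex> lock(mutex_);
          if (items_.empty()) return false;
          out = items_.front();
          items_.pop();
          return true;
      }
  private:
      std::mutex mutex_;
      std::queue<T> items_;
  };

  struct PickRequest  { int x, y; };               // touch position, goes into queue A
  struct PickResponse { unsigned char rgba[4]; };  // sampled color, comes out of queue B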

The users never see this rainbow collision map because it's drawn offscreen. You don't have to transform any coordinates because all of that is done in the shaders. The transforms are exactly the same as those applied to the on-screen image, so the collisions match up 1:1 regardless of zoom or whatever other features you add. The designers get per-pixel accuracy. It's a solid solution (it has been so far, at least).
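
For completeness, the table the application consults can be as simple as a map keyed on the packed color (hypothetical names and example entries):

  #include <map>
  #include <string>

  // Pack an RGB triple into one key: 256^3 possible touch areas.
  unsigned int packRGB(unsigned char r, unsigned char g, unsigned char b) {
      return ((unsigned int)r << 16) | ((unsigned int)g << 8) | (unsigned int)b;
  }

  std::map<unsigned int, std::string> hotspotEvents;

  void setupExampleEvents() {
      hotspotEvents[packRGB(255, 0, 0)] = "show_painting_info";  // red region
      hotspotEvents[packRGB(0, 255, 0)] = "open_door";           // green region
  }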

Caveats include not working in the simulator. I believe this is because the technique is hardware accelerated and thus dependent on, well, hardware, not just simulated architecture.

Tuesday, February 19, 2013

UDK Note: Take Damage Events

<This note is intended for people who know about UDK Kismet TakeDamageEvents but can't seem to get the thing to work>...

Let's say you want to shoot a light to hide a light shaft, or shoot a barrel to make it explode into flames.

Right click the object you want to shoot, and if it's a static mesh, convert it to a mover (Convert > Mover). With the object selected, go to its properties (press F4), go to Collision, and set "Collision Type" to "Block All".

Now your object should get hit, and the event's targets should process.

The moral of the story is to make sure your static meshes are at least movers with Block All collision so the Take Damage event triggers.

Wednesday, February 6, 2013

How To: Obj-C Set Mouse Position

Reference: http://stackoverflow.com/questions/8059667/set-the-mouse-location

Code:
  // Create an event source and a mouse-moved event at (X, Y), then post it
  // to the system event stream. The button argument is ignored for move events.
  CGEventSourceRef source = CGEventSourceCreate(kCGEventSourceStateCombinedSessionState);
  CGEventRef mouse = CGEventCreateMouseEvent(source, kCGEventMouseMoved,
                                             CGPointMake(X, Y), kCGMouseButtonLeft);
  CGEventPost(kCGHIDEventTap, mouse);
  CFRelease(mouse);
  CFRelease(source);
Include:
  #include <ApplicationServices/ApplicationServices.h>

You just set the X and Y.

Tuesday, February 5, 2013

UDK How To: Detect Alternative Fire on Custom Weapons

Alt fire is when you press the right mouse button to shoot.

For instance, the Link Gun in UT has an alt fire of a beam.

If you are creating a Custom weapon you can access this functionality quite easily.

Note that there are other ways to do this (such as creating your own exec function for "DefaultInput.ini" to refer to). The approach below lets you access alt-fire without modifying anything other than your own custom weapon class.

You will need to work with:

  • Your Weapon script
You may read the following scripts:

  • PlayerController.uc
  • Pawn.uc
  • Weapon.uc
  • DefaultInput.ini


TL;DR
You will go to your weapon and override the function:
  • simulated function StartFire(byte FireModeNum)
FireModeNum 0 means primary, 1 means alt.

So you can have functionality execute or not execute depending on the 0 or 1 value.

SOME EXPLANATION
If you check out the UDK\UDKGame\Config\DefaultInput.ini, you can scroll around until you find the player input for alt fire:
  • .Bindings=(Name="GBA_AltFire",Command="StartAltFire | OnRelease StopAltFire")
We can see that an exec function gets called to handle the business, "StartAltFire".

This means that in the PlayerController script this function gets called.

When we check out the script ourselves we see that it then calls:
  • Pawn.StartFire( FireModeNum );
In Pawn.uc, we see this in turn calls:
  • Weapon.StartFire(FireModeNum);
...which is what we took advantage of.

Do you see? By tracing from the input binding to the code that handles it, we were able to find the scripts related to our goal and utilize the fuck out of the necessary functions.

This can be a good way to debug your UDK designs (that is, to find out what is going on in this huge system).


Friday, January 25, 2013

Chrome local file access

I've been doing some things at work on a webpage for Chrome Desktop, Chrome Android and Safari. We basically need to present the same 3D content across all of these browsers.

When testing in Chrome, pages opened straight from the local filesystem can't load other local content, for security reasons. There are several fixes for this. The quickest is to run Chrome with a specific argument that disables the security feature; I won't describe that fix, as it isn't a good idea.

No, the good idea is to run the site you're working on via a local server. It's also very easy to do.


  • Download the latest version of Python 2.X
  • Open up the command prompt or terminal and go to the folder with the site's index.html.
  • Once in this folder, run the command [python -m SimpleHTTPServer].
  • Your folder's contents will be hosted at the web address [http://localhost:8000].

And that's it! You can safely dev your site on this Python local host now. When you are done, just press Ctrl+C in the terminal (or close it).

Remember to launch the server only once the terminal is in the folder with the site's index.html.

Friday, January 18, 2013

OpenGL explained: Framebuffer Object & Renderbuffer

Upon looking around for some info on an issue I'm currently having, I found the following posted by user "Nicol Bolas" on Stack Overflow. I've read a few of this user's posts and they are quite knowledgeable on OpenGL-related things.

Also, Nicol Bolas is the name of the elder dragon from Magic: The Gathering. Coincidence? Lol.

Anyway, the description is quite concise in my opinion.


A framebuffer object is a place to stick images so that you can render to them. Color buffers, depth buffers, etc. all go into a framebuffer object.

A renderbuffer is like a texture, but with two important differences:

  1. It is always 2D and has no mipmaps. So it's always exactly 1 image.
  2. You cannot read from a renderbuffer. You can attach them to an FBO and render to them, but you can't sample from them with a texture access or something.

So you're talking about two mostly separate concepts. Renderbuffers do not have to be "for depth testing." That is a common use case for renderbuffers, because if you're rendering the colors to a texture, you usually don't care about the depth. You need a depth buffer because you need depth testing for hidden-surface removal. But you don't need to sample from that depth. So instead of making a depth texture, you make a depth renderbuffer.

But renderbuffers can also use colors rather than depth formats. You just can't attach them as textures. You can still blit from/to them, and you can still read them back with glReadPixels. You just can't read from them in a shader.
http://stackoverflow.com/questions/9850803/glsl-renderbuffer-really-required

So you can't really use a renderbuffer for anything yourself. You put it in there so the system has a place to write things.

The textures, on the other hand, are for things you plan to work with.
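
To make the split concrete, here's a minimal sketch of a typical setup in desktop GL (assuming GLEW for function loading; sizes are placeholders): a texture for the color attachment, since you plan to sample it later, and a renderbuffer for depth, since the system just needs somewhere to write it:

  #include <GL/glew.h>

  // Build an FBO with a sampleable color texture and a write-only depth buffer.
  GLuint makeFBO(int width, int height, GLuint* outColorTex) {
      GLuint fbo, colorTex, depthRb;

      glGenTextures(1, &colorTex);
      glBindTexture(GL_TEXTURE_2D, colorTex);
      glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
                   GL_RGBA, GL_UNSIGNED_BYTE, NULL);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

      glGenRenderbuffers(1, &depthRb);
      glBindRenderbuffer(GL_RENDERBUFFER, depthRb);
      glRenderbufferStorage(GL_RENDERBUFFER, GL_DEPTH_COMPONENT24, width, height);

      glGenFramebuffers(1, &fbo);
      glBindFramebuffer(GL_FRAMEBUFFER, fbo);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                             GL_TEXTURE_2D, colorTex, 0);
      glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT,
                                GL_RENDERBUFFER, depthRb);

      // Always check completeness before rendering into the FBO.
      if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
          /* handle the error */
      }
      glBindFramebuffer(GL_FRAMEBUFFER, 0);
      *outColorTex = colorTex;
      return fbo;
  }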