Wednesday, February 20, 2013

iOS Note: Panoramic Hotspots

This is just a note on some of the solutions I've implemented for features on a project we are doing at my job.

So we have a panoramic viewer on the iPad. There is an image mapped to a sphere and a camera in the center of the sphere. You can rotate around and zoom in on things.

The designers wanted users to be able to tap specifically designated areas to trigger events. For instance, tap this painting and you get info on the piece.

In the legacy code I inherited for the project (it was already roughed out in prototype), these triggers were detected via point-radius calculations: here is an XYZ position, here is its radius, do BLAH when a touch hits within that radius. This works fine to some level of accuracy, but they quickly found that they needed not only more accuracy but also a more flexible way for other designers on the team to author these areas without reaching into the compiled code.

So they asked me, and my answer was to move the problem from model space to image space: drop the positions and distances and just work with the pixels, since that's what you're interacting with anyway.

My solution was basically to modify the rendering pipeline a bit, and to add a message queue.

For the pipeline, set up framebuffer rendering so that you aren't only writing to the screen but also to an offscreen buffer (note that this is a framebuffer, not a renderbuffer). What do we draw to this offscreen buffer? We have the artists/designers copy the panoramic texture and paint the areas they'd like to respond to touch in solid, flat colors (no gradients or whatnot). With 8 bits per RGB channel you get 256³ ≈ 16.7 million colors. That's over 16 million unique touch areas. Plenty.

Now you can just glReadPixels a single pixel at the touch position from the offscreen buffer to get the color that was touched.

What I do is, on a touch, push a "request" for a sampling into queue-A. Queue-A is the "request mailbox", the renderer's "inbox". The renderer checks this when it's submitting render calls, and if it sees any requests in its inbox, it calls glReadPixels and puts the result into queue-B, the "response mailbox", the renderer's "outbox". The rest of the application checks this outbox regularly to see if anything is in it. It takes any mail from it, which will be an RGB color, and checks a lookup table to see which event to trigger for that color. And that is it.

The users don't see this rainbow collision map because it's drawn offscreen. You don't have to transform any coordinates because all of that is done in the shaders. The transforms are exactly the same as those performed on the screen image, so the collisions match up 1:1 regardless of zoom or whatever other camera features you add. The designers get per-pixel accuracy. It's a solid solution (has been so far, at least).

Caveats include not working in the simulator. I believe this is because the technique is hardware accelerated and thus dependent on, well, hardware and not just simulated architecture.

Tuesday, February 19, 2013

UDK Note: Take Damage Events

<This note is intended for people who know about UDK Kismet TakeDamageEvents but can't seem to get them to work.>

Let's say you want to shoot a light to hide a light shaft, or shoot a barrel to make it explode into flames.

Right-click the object you want to shoot, and if it's a static mesh, convert it to a mover (Convert > Mover). With the object selected, go to its properties (press F4), go to Collision, and set "Collision Type" to "Block All".

Now your object should get hit and the event's target should process.

Moral of the story: make sure your static meshes are at least movers with Block All collision so the take damage event triggers.

Wednesday, February 6, 2013

How To: Obj-C Set Mouse Position

Reference: http://stackoverflow.com/questions/8059667/set-the-mouse-location

Code:
 // Create an event source and a mouse-moved event at (X, Y).
 CGEventSourceRef source = CGEventSourceCreate(kCGEventSourceStateCombinedSessionState);
 CGEventRef mouse = CGEventCreateMouseEvent(source, kCGEventMouseMoved, CGPointMake(X, Y), kCGMouseButtonLeft);
 // Post it to the HID event tap, then release both objects.
 CGEventPost(kCGHIDEventTap, mouse);
 CFRelease(mouse);
 CFRelease(source);
Include:
 #include <ApplicationServices/ApplicationServices.h>

Just substitute your own X and Y coordinates.

Tuesday, February 5, 2013

UDK How To: Detect Alternative Fire on Custom Weapons

Alt fire is when you press the right mouse button to shoot.

For instance, the Link Gun in UT has an alt fire of a beam.

If you are creating a Custom weapon you can access this functionality quite easily.

Note that there are other ways to do this (such as creating your own exec function for the "DefaultInput.ini" file to refer to), but this approach lets you access alt fire without modifying anything other than your own custom weapon class.

You will need to work with:

  • Your Weapon script
You may read the following scripts:

  • PlayerController.uc
  • Pawn.uc
  • Weapon.uc
  • DefaultInput.ini


TL;DR
You will go to your weapon class and override the function:
  • simulated function StartFire(byte FireModeNum)
FireModeNum 0 means primary fire, 1 means alt fire.

So you can have functionality execute or not execute depending on the 0 or 1 value.
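A minimal sketch of what that override can look like (the class name is a placeholder, and I'm assuming your weapon extends UTWeapon; substitute whatever your project's base weapon class is):

```unrealscript
// Placeholder custom weapon that branches on fire mode.
class MyCustomWeapon extends UTWeapon;

simulated function StartFire(byte FireModeNum)
{
    if (FireModeNum == 1)
    {
        // Alt fire (right mouse button): custom behavior goes here.
        `log("Alt fire pressed");
    }
    // Let the parent class run the normal firing logic.
    super.StartFire(FireModeNum);
}
```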

SOME EXPLANATION
If you check out the UDK\UDKGame\Config\DefaultInput.ini, you can scroll around until you find the player input for alt fire:
  • .Bindings=(Name="GBA_AltFire",Command="StartAltFire | OnRelease StopAltFire")
We can see that an exec function gets called to handle the business: "StartAltFire".

This means this function gets called in the PlayerController script.

When we check out the script ourselves we see that it then calls:
  • Pawn.StartFire( FireModeNum );
In Pawn.uc, we see this in turn calls:
  • Weapon.StartFire(FireModeNum);
...which is what we took advantage of.

Do you see? By tracing the desired behavior from the input binding, we were able to find the scripts related to our result and utilize the fuck out of the necessary functions.

This can be a good way to debug your UDK designs (that is, to find out what is going on in this huge system).