Wednesday, February 20, 2013

iOS Note: Panoramic Hotspots

This is just a note on some of the solutions I've implemented for features on a project we are doing at my job.

So we have a panoramic viewer on the iPad. There is an image mapped to a sphere and a camera in the center of the sphere. You can rotate around and zoom in on things.

The designers wanted the ability to touch specifically designated areas to trigger events. For instance, touch this painting and you get info on the piece.

In the legacy code I inherited for the project (it was already roughed out as a prototype), these triggers were detected via point-radius calculations: here is an XYZ position, here is its radius, do BLAH when a touch hits within that radius, roughly like the sketch below. This works fine to some level of accuracy, but they quickly found that they needed not only more accuracy but also a more flexible way for other designers on the team to author these areas without reaching into the compiled code.
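For context, the legacy check probably looked something like this (the names are mine, not from the project): unproject the touch into a ray from the camera at the sphere's center, then test whether that ray passes within the hotspot's radius.

```c
#include <stdbool.h>

// Hypothetical reconstruction of the legacy point-radius hit test.
// The camera sits at the sphere's center, so a touch unprojects to a
// direction vector; a hotspot is "hit" if the ray passes within
// `radius` of the hotspot's XYZ position.
typedef struct { float x, y, z; } Vec3;

static float vdot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

bool hitsHotspot(Vec3 rayDir /* normalized */, Vec3 center, float radius) {
    // Closest approach of the ray (from the origin) to the hotspot center.
    float t = vdot(rayDir, center);
    Vec3 closest = { rayDir.x*t, rayDir.y*t, rayDir.z*t };
    Vec3 diff = { center.x - closest.x, center.y - closest.y, center.z - closest.z };
    return vdot(diff, diff) <= radius * radius;
}
```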

So they asked me, and I said to move the problem from model space to image space. Drop the positions and distances and just work with the pixels since that's what you are interacting with anyway.

My solution was basically to modify the rendering pipeline a bit, and to add a message queue.

For the pipeline, set up framebuffer rendering so that you aren't only writing to the screen but also to an offscreen buffer. Note that this will be a framebuffer object, not the screen's renderbuffer. What do we draw to this offscreen buffer? We have the artists/designers copy the panoramic texture and paint, in solid flat colors (no gradients or anything like that), the areas they would like to respond to touch. With 8 bits per RGB channel you get 256 × 256 × 256 = 16,777,216 colors. That's over 16 million unique touch areas. Plenty.
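Here's a minimal sketch of that offscreen setup in OpenGL ES 2.0 (names like pickFBO are mine, not from the project):

```c
#include <OpenGLES/ES2/gl.h>

// Hypothetical names for the offscreen pick buffer.
GLuint pickFBO, pickTexture;

void setupPickBuffer(GLsizei width, GLsizei height) {
    // Color texture that will hold the flat-colored touch-area render.
    glGenTextures(1, &pickTexture);
    glBindTexture(GL_TEXTURE_2D, pickTexture);
    // NEAREST filtering: never blend two hotspot colors together.
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    // Offscreen framebuffer object with the texture as its color attachment.
    glGenFramebuffers(1, &pickFBO);
    glBindFramebuffer(GL_FRAMEBUFFER, pickFBO);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, pickTexture, 0);

    if (glCheckFramebufferStatus(GL_FRAMEBUFFER) != GL_FRAMEBUFFER_COMPLETE) {
        // handle the error
    }
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```

The same idea applies to the painted collision texture itself: sample it with nearest filtering and no mipmapping, or the pixels on the border between two touch areas will get averaged into colors that aren't in anyone's table.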

Now you can just glReadPixels a single pixel at the touch screen position from the offscreen buffer to get the color they touched.
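Something like this, assuming the pickFBO from the sketch above. Two gotchas worth noting: UIKit's origin is top-left while GL's is bottom-left, so you flip Y, and on Retina screens the touch point needs to be scaled by the view's contentScaleFactor first.

```c
#include <OpenGLES/ES2/gl.h>

extern GLuint pickFBO; // from the earlier sketch

// Read the color under a touch. x and y are in pixels (already
// scaled by contentScaleFactor), with y flipped to GL's convention.
void readPickedColor(GLint x, GLint y, GLsizei bufferHeight, GLubyte outRGBA[4]) {
    glBindFramebuffer(GL_FRAMEBUFFER, pickFBO);
    glReadPixels(x, bufferHeight - 1 - y, 1, 1,
                 GL_RGBA, GL_UNSIGNED_BYTE, outRGBA);
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```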

What I do is, on a touch, push a "request" for a sampling into queue A. Queue A is the "request mailbox", the renderer's inbox. The renderer checks this while it's submitting render calls, and if it sees any requests in its inbox it calls glReadPixels and puts the result into queue B, the "response mailbox", the renderer's outbox. The rest of the application checks this outbox regularly to see if anything is in it. It takes any mail it finds, which will be an RGB color, and checks a table to see what event to trigger given that color. And that is it.
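A rough sketch of those two mailboxes, with my own hypothetical names and a fixed-size ring buffer standing in for each queue (the real code would need locking, or each end of each queue confined to one thread):

```c
#include <OpenGLES/ES2/gl.h>
#include <stdbool.h>

void readPickedColor(GLint x, GLint y, GLsizei bufferHeight, GLubyte outRGBA[4]); // earlier sketch

typedef struct { GLint x, y; } PickRequest;    // queue A: touch position
typedef struct { GLubyte rgb[3]; } PickResult; // queue B: sampled color

#define QUEUE_SIZE 8

static PickRequest requests[QUEUE_SIZE];
static unsigned reqHead = 0, reqTail = 0;

static PickResult results[QUEUE_SIZE];
static unsigned resHead = 0, resTail = 0;

// Touch handler: drop a request into the renderer's inbox (queue A).
void pushPickRequest(GLint x, GLint y) {
    requests[reqTail++ % QUEUE_SIZE] = (PickRequest){ x, y };
}

// Renderer, while submitting render calls each frame: service the
// inbox and post each sampled color to the outbox (queue B).
void servicePickRequests(GLsizei bufferHeight) {
    while (reqHead != reqTail) {
        PickRequest r = requests[reqHead++ % QUEUE_SIZE];
        GLubyte rgba[4];
        readPickedColor(r.x, r.y, bufferHeight, rgba);
        results[resTail++ % QUEUE_SIZE] =
            (PickResult){ { rgba[0], rgba[1], rgba[2] } };
    }
}

// App side, polled regularly: take mail from the outbox; the caller
// then looks the color up in its color -> event table.
bool popPickResult(PickResult *out) {
    if (resHead == resTail) return false;
    *out = results[resHead++ % QUEUE_SIZE];
    return true;
}
```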

The users never see this rainbow collision map because it's drawn offscreen. You don't have to transform any coordinates yourself because all of that is done in the shaders. The transforms are exactly the same as those applied to the on-screen image, so the collisions match up 1:1 regardless of zoom or whatever other features you add. The designers get per-pixel accuracy. It's a solid solution (it has been so far, at least).

One caveat: it doesn't work in the simulator. I believe this is because the technique is hardware accelerated and thus dependent on, well, hardware, and not just simulated architecture.
