If you read my last post you know that I’m working on a game for iPhone. As part of my experimentation, I ran across a small issue I thought I’d share.
Part of my game play involves the user touching objects on the screen, and those objects can be fairly small. In my light testing on my iPhone (along with some brief play testing done by my wife), it proved hard to hit those objects. To track this, I added some code to draw an ‘X’ where the last touch occurred. Generally, it seemed that both of us were hitting down and to the right of the target area. As we are both right-handed, I would guess this is because our aim is guided largely by the position of the fingertip, while the actual contact area is back from that, at the finger pad.
The inaccuracy is not a surprise, I suppose. Most standard user interface elements have a reasonably large target area and at least stay in a fixed position, so this generally isn’t a problem in typical applications. In the heat of game play, however, you don’t want to require touches to be too accurate, or at the very least you want touches to register where the user expects them to. Otherwise, the player will become frustrated and will hate your game!
So, my solution? One of the considerations in my game design was to put my objects in a grid rather than placing them at arbitrary locations on the screen. As I implement this, I will start by registering a touch on an object when the user touches the grid cell that contains it. So far, the ‘X’ I’m drawing at the last touch location appears to land in the intended grid cell. That’s a good thing. My one concern is that using the grid cell as a proxy for touching an object makes it too easy to touch small objects. If that’s the case, I’ll fall back to hit-testing the object itself, but relax the accuracy required by allowing a few pixels of slop outside the object’s boundaries.
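To make the two approaches concrete, here is a minimal, language-agnostic sketch of both the grid-cell hit test and the slop fallback. The cell size, slop value, and function names are all my own assumptions for illustration; my actual game code lives in the iPhone SDK, not Python.

```python
CELL_SIZE = 40  # assumed size of one grid cell, in points

def cell_for_touch(x, y, cell_size=CELL_SIZE):
    """Map a touch point to the (column, row) of the grid cell it lands in.
    Touching anywhere in a cell counts as touching the object in that cell."""
    return (int(x // cell_size), int(y // cell_size))

def hit_with_slop(x, y, rect, slop=6):
    """Fallback approach: test against the object's bounding rect, expanded
    by `slop` pixels on every side, so near-misses still register."""
    left, top, width, height = rect
    return (left - slop <= x <= left + width + slop and
            top - slop <= y <= top + height + slop)
```

The grid version is the more forgiving of the two: a touch anywhere in the cell hits the object. The slop version requires aiming at the object itself, just with slightly relaxed boundaries.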
Has anybody else experienced this in their game design? How did you solve it?