Spriter implementation for FlatRedBall

For over a year now, I have been working on an API and plugin for the FlatRedBall engine that make it dead simple to load, play, and manipulate Spriter animations in your FlatRedBall games. The implementation is written as an extension to the engine, so you get all the goodness that comes from using first-class objects the engine understands.

A few features that other Spriter implementations may not have:

  • Positioning in 3D space
  • Scaling the entire animation
  • Setting animation speed
  • Reversing animation playback (negative speed)
  • Rotating an animation on any axis (X/Y/Z)
  • Cloning animations
  • Playing every entity in an animation file simultaneously via SpriterObjectCollection
    • Every one of the features above works on SpriterObjectCollection as well

This is just a subset of the features in my Spriter implementation. If you are interested, install the FlatRedBall development kit, then head over to the releases section to get the latest release of my plugin. Installing it into the Glue tool that comes with FRBDK is a simple process. Follow the tutorials, and you’ll be creating and using Spriter animations in a real game in no time!

Test Driven Development (TDD) vs. Traditional Testing

TDD can get you pretty far, but integration testing is necessary.

The subject of this post is a bit of a misnomer, because the two are not mutually exclusive. That is to say, test driven development is not a replacement for testing. In fact, test driven development has less to do with testing than with design. TDD should drive your class design in a way that makes it easier to get to the real testing phase. Certain cases will not be apparent during the initial class design, so when developing any application, testing should begin as soon as possible. TDD can get you there faster, because a lot of the pieces of the application can be faked since the design is testable!

Anything that gets mocked in a unit test can be faked in a built and deployed application, so business users, UX specialists, and designers will get a chance to play with the app to tweak requirements very early in the process! This early change is a lot less risky than late change when an application is “done” because the whole system is less complex.

Take, for example, my most recent project, for which I am using TDD. It is a gesture library for desktop games using XNA (not Windows 8 store games). I created a GestureProvider object which relied on an ITouchEventProvider interface to receive raw TouchEvent objects and return GestureSample objects. With just these four objects conceived, I wrote some simple tests to prove that Tap, Drag, and Pinch gestures could be detected given the proper touch events.

The tap test went something like…

    [TestMethod]
    public void single_tap_registers_from_one_touch()
    {
        // Given a mock ITouchEventProvider that returns the following
        _container
            .Arrange<ITouchEventProvider>(p => p.Events)
            .Returns(new List<TouchEvent>
            {
                new TouchEvent // touch down at 0,0
                {
                    Id = 1,
                    Position = Vector2.Zero,
                    Action = TouchEvent.TouchEventAction.Down,
                    TimeStamp = DateTime.Now
                },
                new TouchEvent // touch up at 0,0
                {
                    Id = 1,
                    Position = Vector2.Zero,
                    Action = TouchEvent.TouchEventAction.Up,
                    TimeStamp = DateTime.Now.AddMilliseconds(200.0)
                }
            });

        var gestureProvider = _container.Instance;

        // Get gestures from the real GestureProvider object (system under test or SUT)
        var samples = gestureProvider.GetSamples();

        // Assert that there is one GestureSample object for a Tap at 0,0
        var gestureSamples = samples.ToList();
        Assert.AreEqual(1, gestureSamples.Count);

        var tap = gestureSamples[0];
        Assert.AreEqual(Vector2.Zero, tap.Delta);
        Assert.AreEqual(Vector2.Zero, tap.Delta2);
        Assert.AreEqual(Vector2.Zero, tap.Position);
    }

I wrote that test, and similar ones for Drag and Pinch. Everything seemed to be going so well that I wanted to try them out for real, because I had a sneaking suspicion I was missing something. I wrote up a quick test for a real ITouchEventProvider implementation that would use an interop library to listen for events and provide them to the GestureProvider. Then I fired up a real game and added the necessary code to use the GestureProvider. I noticed one thing right away: a tap was not registering as a Tap; instead it was a Drag. I double-checked my tests, and everything looked fine, so I had to debug a bit. Eventually I found that my assumption about which events fire for a tap was flawed: there can be any number of “move” events between “down” and “up”. I added one move event to the test arrangement, fixed the GestureProvider so the test passed, and then it worked in the game. This shows that integration testing is a very important step in any system.

My unit tests alone did not make the whole system work, but via TDD I had designed the classes such that there was a clear path to fix the test to satisfy the real-world scenario instead of my erroneous assumption. Chalk up another TDD win!

Multi touch and gesture input in Windows with DPI scaling

I mentioned in a previous post that I am working on a new project related to gesture input. That very day, I hit a wall regarding desktop scaling, and last night I broke through it! Perhaps a topic for another post: with some applications, TDD can get you to a certain point, but integration testing is a must pretty early on.

The FRBTouch project is no exception! There are a few different problems to solve with this project:

  • Touch event capturing
  • Gesture detection
    • Taking touch events and making gestures
    • e.g. One touch event down then up is a tap
  • Coordinate translation
    • Taking window coordinates and translating them in an application (e.g. a FlatRedBall game)

The first two bullet points turned out to be the easiest, because they were mockable. For instance:

        public void single_tap_registers_from_one_touch()
        {
            // Arrange
            _container
                .Arrange<ITouchEventProvider>(p => p.Events)
                .Returns(new List<TouchEvent>
                {
                    new TouchEvent
                    {
                        Id = 1,
                        Position = Vector2.Zero,
                        Action = TouchEvent.TouchEventAction.Down,
                        TimeStamp = DateTime.Now
                    },
                    new TouchEvent
                    {
                        Id = 1,
                        Position = Vector2.Zero,
                        Action = TouchEvent.TouchEventAction.Move,
                        TimeStamp = DateTime.Now.AddMilliseconds(10)
                    },
                    new TouchEvent
                    {
                        Id = 1,
                        Position = Vector2.Zero,
                        Action = TouchEvent.TouchEventAction.Up,
                        TimeStamp = DateTime.Now.AddMilliseconds(200.0)
                    }
                });
            var gestureProvider = _container.Instance;

            // Act
            var samples = gestureProvider.GetSamples();

            // Assert
            var gestureSamples = samples.ToList();
            Assert.AreEqual(1, gestureSamples.Count);

            var tap = gestureSamples[0];
            Assert.AreEqual(Vector2.Zero, tap.Delta);
            Assert.AreEqual(Vector2.Zero, tap.Delta2);
            Assert.AreEqual(Vector2.Zero, tap.Position);
        }

That’s the test that proves a tap gesture is detectable given how the touch events are provided. It was easy to set up mock scenarios for drag and pinch as well, and just assert the required gesture return values. The TouchEvent object also maps pretty closely to the events that User32.dll provides, so there wasn’t much to test for actually capturing events.

The major problems came when attempting to translate coordinates from touching an XNA game window into world coordinates. I use a Surface Pro for all development, and because the screen is small, having 150% scaling on at all times is pretty much a necessity. Windows scales all windows up, but in doing so it breaks the coordinate system for touch input. This is not something you can see or solve with test driven development (at least not with traditional unit tests), because it requires a live scaled window and a graphics object to operate.
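As background, a "normal" window-to-world translation routine in XNA terms usually boils down to a Viewport.Unproject call. Here is a generic sketch (my own illustrative helper, not FRBTouch's actual code):

```csharp
using Microsoft.Xna.Framework;
using Microsoft.Xna.Framework.Graphics;

// Hypothetical helper (not from FRBTouch): translate a touch position given
// in window/client pixels into world space via XNA's Viewport.Unproject.
public static class TouchCoordinateTranslator
{
    public static Vector3 ToWorld(Vector2 windowPosition,
                                  Viewport viewport,
                                  Matrix projection,
                                  Matrix view)
    {
        // z = 0 unprojects onto the near plane; use z = 1 for the far plane
        // and interpolate between the two if you need a ray into the scene.
        var source = new Vector3(windowPosition, 0f);
        return viewport.Unproject(source, projection, view, Matrix.Identity);
    }
}
```

Routines like this silently assume that one reported touch pixel equals one client-area pixel, which is exactly the assumption DPI virtualization breaks.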

To solve the problem, you simply disable the auto scaling and tell Windows that the application will handle the DPI settings itself. In other words, you make your application DPI aware. The window will then not auto-scale and the coordinate system will not be broken, so normal translation routines will work.
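To make that concrete, here is a sketch of the manifest route (assuming a standard Visual Studio app.manifest; calling SetProcessDPIAware() from user32.dll before the window is created is the programmatic alternative):

```xml
<!-- app.manifest: declare the process DPI aware so Windows does not
     virtualize (auto-scale) the window or its touch coordinates. -->
<assembly xmlns="urn:schemas-microsoft-com:asm.v1" manifestVersion="1.0">
  <application xmlns="urn:schemas-microsoft-com:asm.v3">
    <windowsSettings>
      <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
    </windowsSettings>
  </application>
</assembly>
```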

Touch Input in Windows 7 or 8 desktop mode

I am the proud owner of a 128GB Surface Pro and a Lumia 822, and I had an HTC Trophy Windows Phone 7 when it first came out. I have been an XNA enthusiast since about version 3.1, and I have blogged about input management before. I mention all this because I am a fan of the Gesture API the XNA team released for WP7. One decision they made, though, was to release that API only for WP7 and leave it out of the XNA 4 PC libraries. So I decided to write a library that does gesture sampling in a very similar way!

I am excited to announce a new project I am working on! Though it is named FRBTouch (because it will eventually aim to integrate with the FlatRedBall engine), it has components that will work on any system with the XNA libraries installed. It already detects gestures like tap, FreeDrag, and pinch, providing GestureSample objects that are identical to the XNA WP7 implementation!

Here is some example code that already uses the library in an XNA game:

    public partial class TouchScreen
    {
        private GestureProvider _gestureProvider;

        void CustomInitialize()
        {
            _gestureProvider = new GestureProvider(new QueueingTouchEventProvider(FlatRedBallServices.Game.Window.Handle));
        }

        void CustomActivity(bool firstTimeCalled)
        {
            var gestures = _gestureProvider.GetSamples();

            if (gestures != null)
            {
                foreach (var gestureSample in gestures)
                {
                    switch (gestureSample.GestureType)
                    {
                        case GestureType.Tap:
                            FlatRedBall.Debugging.Debugger.CommandLineWrite("Tap");
                            break;
                        case GestureType.FreeDrag:
                            FlatRedBall.Debugging.Debugger.CommandLineWrite("Drag");
                            break;
                        case GestureType.Pinch:
                            FlatRedBall.Debugging.Debugger.CommandLineWrite("Pinch");
                            break;
                        case GestureType.DragComplete:
                            FlatRedBall.Debugging.Debugger.CommandLineWrite("DragComplete");
                            break;
                        case GestureType.PinchComplete:
                            FlatRedBall.Debugging.Debugger.CommandLineWrite("PinchComplete");
                            break;
                    }
                }
            }
        }
    }

Think of the CustomInitialize function as the constructor, and the CustomActivity function as the game loop (this is just how Glue from FRBDK organizes a screen).

I hope to round out the remaining gestures and add flags to enable and disable specific gestures as time goes on!

Separation of responsibilities: Ability, AbilityEffect, and EffectManager

After talking with @Sunflash93 on twitter (find their blog here), I got to thinking I should post about how I designed the ability system in my game Z-Com. See my previous blog post for more info about the game.

In my coding adventures, I try to stick to the SOLID principles of OOD as outlined here. My favorite, and arguably the easiest of the principles to smell out violations of, is the Single Responsibility Principle. In short: a class should have one, and only one, reason to change. This can and should be applied to all facets of OO programming, including game development.

When first trying to flesh out the details of how I would do abilities, I knew I wanted a few things:

  • Single target abilities
  • AOE/Multi target abilities
  • Friendly abilities (heals)
  • Damage/Healing over time abilities
  • Constant effects (increases your speed by x for y seconds)

Starting with single target abilities, I thought perhaps I would just do it all in one class (Ability). A TacticalEntity (my movable player and zombie object) would get a list of abilities that it could fire at will (zombies through their AI, players through some GUI). What is so wrong with this approach? For starters, it would work fine for a single target or AOE instant ability, but how would a single method on a single instance of an ability apply a damage over time effect (e.g. does 10 damage per second for 4 seconds)? You have to apply something to an entity and have it stick: AbilityEffect.

That was the biggest revelation for me: separate the responsibility of applying effects and the actual damage/healing effect to a different object altogether. Ok great… now I can just put a collection of effects on a TacticalEntity and an ability can apply effects to the entity! Wait… who is responsible for removing the effects once they expire? For that matter, who is responsible for keeping track of all the effects?

Of course the effect could probably have handled all of this, and the entity itself could have removed effects from itself when they expire, but that’s not the responsibility of the TacticalEntity. It already has a lot of code and does enough. That is where EffectManager comes along. It’s a static class with an Activity() method that gets called every frame, and it gets the honor of keeping track of a Dictionary<TacticalEntity, List<IAbilityEffect>> which holds all effects applied to all TacticalEntities.

In both of the examples above, adhering to the Single Responsibility Principle drove me to make decisions which keep my code more concise and maintainable. Any time you hear yourself saying you can use a single class to do multiple different things, you should ask yourself if it would work better split into separate responsibilities.

I haven’t done an AOE ability yet, but that is all about targeting and figuring out which entities to apply effects to, outside of the ability/effect/manager classes described above. Without rambling any further, here is the code as it stands right now:

BasicAbility.cs:

    public class BasicAbility : IAbility
    {
        public BasicAbility(List<IAbilityEffect> effects)
        {
            Effects = effects;
        }

        public void execute(TacticalEntity source, List<TacticalEntity> destination)
        {
            foreach (TacticalEntity entity in destination)
            {
                execute(source, entity);
            }
        }

        public void execute(TacticalEntity source, TacticalEntity destination)
        {
            Projectile projectile = Factories.ProjectileFactory.CreateNew();
            projectile.Position = source.Position;
            projectile.SourceEntity = source;
            projectile.TargetEntity = destination;
            projectile.SpriteAnimate = true;
            projectile.Ability = this;
        }

        public List<IAbilityEffect> Effects
        {
            get;
            private set;
        }
    }

AbilityEffect.cs:

    public class AbilityEffect : IAbilityEffect
    {
        private bool ConstantEffectsApplied = false;

        public AbilityEffect()
        {
            ConstantEffectsApplied = false;
        }

        public AbilityEffect(bool tickImmediately, int healthPerTick, int speedEffect, int defenseEffect, float aggroRadiusEffect, int strengthEffect, int totalticks, string name)
        {
            TickImmediately = tickImmediately;
            HealthEffectPerTick = healthPerTick;
            SpeedEffectWhileActive = speedEffect;
            DefenseEffectWhileActive = defenseEffect;
            AggroRadiusEffectWhileActive = aggroRadiusEffect;
            StrengthEffectWhileActive = strengthEffect;
            TotalTicks = totalticks;
            Name = name;
            ConstantEffectsApplied = false;
        }

        public AbilityEffect(TacticalEntity source, TacticalEntity affectedEntity, IAbilityEffect that)
            : this(that.TickImmediately, that.HealthEffectPerTick, that.SpeedEffectWhileActive, that.DefenseEffectWhileActive, that.AggroRadiusEffectWhileActive, that.StrengthEffectWhileActive, that.TotalTicks, that.Name)
        {
            AffectedEntity = affectedEntity;
            SourceEntity = source;
        }

        public int HealthEffectPerTick
        {
            set;
            get;
        }

        public int SpeedEffectWhileActive
        {
            set;
            get;
        }

        public int DefenseEffectWhileActive
        {
            set;
            get;
        }

        public float AggroRadiusEffectWhileActive
        {
            set;
            get;
        }

        public int StrengthEffectWhileActive
        {
            set;
            get;
        }

        private int _TotalTicks;
        public int TotalTicks
        {
            get
            {
                return _TotalTicks;
            }
            private set
            {
                _TotalTicks = value;
                TicksRemaining = value;
            }
        }

        public int TicksRemaining
        {
            get;
            private set;
        }

        public string Name
        {
            get;
            set;
        }

        public bool Active
        {
            get
            {
                return TicksRemaining > 0;
            }
        }

        public TacticalEntity AffectedEntity
        {
            get;
            set;
        }

        public void ApplyConstantEffects()
        {
            AffectedEntity.strengthEffects += StrengthEffectWhileActive;
            AffectedEntity.defenseEffects += DefenseEffectWhileActive;
            AffectedEntity.speedEffects += SpeedEffectWhileActive;
            AffectedEntity.aggroCircleRadiusEffects += AggroRadiusEffectWhileActive;
        }

        public void RemoveConstantEffects()
        {
            AffectedEntity.strengthEffects -= StrengthEffectWhileActive;
            AffectedEntity.defenseEffects -= DefenseEffectWhileActive;
            AffectedEntity.speedEffects -= SpeedEffectWhileActive;
            AffectedEntity.aggroCircleRadiusEffects -= AggroRadiusEffectWhileActive;
        }

        public void ApplyEffectTick()
        {
            if (!ConstantEffectsApplied)
            {
                ApplyConstantEffects();
                ConstantEffectsApplied = true;
            }
            if (Active)
            {
                AffectedEntity.health += HealthEffectPerTick;
                --TicksRemaining;
            }
        }

        public IAbilityEffect Clone(TacticalEntity source, TacticalEntity entity)
        {
            return new AbilityEffect(SourceEntity, entity, this);
        }

        public TacticalEntity SourceEntity
        {
            get;
            private set;
        }

        public bool TickImmediately { get; set; }
    }

EffectManager.cs:

    public static class EffectManager
    {
        private static Dictionary<TacticalEntity, List<IAbilityEffect>> entityEffects = new Dictionary<TacticalEntity, List<IAbilityEffect>>(20);
        private static double lasttick = TimeManager.CurrentTime;

        public static void AddEffectsToEntity(TacticalEntity source, TacticalEntity entity, List<IAbilityEffect> effects)
        {
            if (!entityEffects.ContainsKey(entity))
            {
                entityEffects.Add(entity, new List<IAbilityEffect>(effects.Count));
            }
            foreach (IAbilityEffect effect in effects)
            {
                AddEffectToEntity(source, entity, effect);
            }
        }

        public static void AddEffectToEntity(TacticalEntity source, TacticalEntity entity, IAbilityEffect effect)
        {
            IAbilityEffect newEffect = effect.Clone(source, entity);
            List<IAbilityEffect> effects;
            if (!entityEffects.ContainsKey(entity))
            {
                effects = new List<IAbilityEffect>(1);
                entityEffects.Add(entity, effects);
            }
            else
            {
                effects = entityEffects[entity];
            }
            effects.Add(newEffect);

            if (newEffect.Active && newEffect.TickImmediately)
            {
                newEffect.ApplyEffectTick();
            }
        }

        public static void Activity()
        {
            if ((TimeManager.CurrentTime - lasttick) > 1.0)
            {
                lasttick = TimeManager.CurrentTime;
                foreach (KeyValuePair<TacticalEntity, List<IAbilityEffect>> pair in entityEffects)
                {
                    TickAndRemoveInactiveEffects(pair);
                }
            }
        }

        private static void TickAndRemoveInactiveEffects(KeyValuePair<TacticalEntity, List<IAbilityEffect>> pair)
        {
            foreach (IAbilityEffect effect in pair.Value)
            {
                effect.ApplyEffectTick();
            }

            for (int x = pair.Value.Count - 1; x >= 0; --x)
            {
                if (!pair.Value[x].Active)
                {
                    pair.Value[x].RemoveConstantEffects();
                    pair.Value.RemoveAt(x);
                }
            }
        }
    }

And the call to attack another entity:

        public virtual void attack(TacticalEntity attackableEntity)
        {
            if (this.currentAbility != null &&
                this.attackCircle.CollideAgainst(attackableEntity.hitCircle))
            {
                this.currentAbility.execute(this, attackableEntity);                
            }
        }

Manic coder on a mission: FlatRedBall game engine, Tiled Map Editor, and you.

I’ve been busy. Like a manic coder on a mission busy, and I love it. I haven’t been this driven about anything in a long time, and I am loving every second of it.

About a month ago, Joel Martinez introduced me to FlatRedBall. I posted a quick blog post about it, but I did not do justice to this amazing engine. Not only is the engine the best thing since sliced bread (I will never do game development without it ever again), but the people there are AMAZING.

Let’s talk for a second about Victor Chelaru. Where do I begin? He created FlatRedBall on a philosophy: he was touched by the thorough, selfless help of a random stranger on the gamedev.net forums named Teej, and he uses that experience to drive development efforts at FRB. I am seriously impressed with his work ethic and desire to help others. That’s enough about Vic and FRB, though. As I said, I have been busy.

I have been working on an X-Com type game for a few months now, starting mostly before I found out about FRB. I had an extremely simple prototype that was unbelievably complex under the covers, because I wrote it straight up with XNA. FlatRedBall nearly trivializes the work I did there, but that is a good thing. With the FRB engine at my disposal, I can just whip up an isometric tile map, slap a NodeNetwork on it, and have a sprite pathfind to its heart’s content in 2.5D. Well, now I can, but there was a major piece missing when I arrived at all the FRB goodness.

So I filled the gap. I wrote a Tiled Map Editor-to-FlatRedBall toolkit, which I am now calling TMXGlue.

Basically, this tool plugs right into Glue (I will write about Glue eventually, I’m sure) and allows you to add Tiled maps directly to your game and have them displaying immediately. Not only are orthogonal and isometric maps and tilesets supported, but both types work with object layers in TMX for collision and node network generation, producing a NodeNetwork and a ShapeCollection which you can use in-game for A* pathfinding and collisions however you want.

I am constantly working to improve this system, so please leave comments here if you have anything you would like to contribute, see added, or if you find anything completely broken. You can also drop me a note on twitter if you prefer to.

FlatRedBall game engine

In speaking with Joel Martinez about my most recent game project, we got to the point of talking about finishing a game, and he mentioned something about animation. A friend of mine had developed part of a game engine I had always planned to use which had a pretty intuitive and well laid out library. When I mentioned that to him, he pointed me toward FlatRedBall.com.

I was blown away.

This game engine / framework has it all.

At first I felt that using this sort of thing would mean I was giving up something, like I had lost the “cool” factor or some form of virtual street cred: developing with FlatRedBall would be nothing more than using some tool where you drag and drop a bunch of things, make a game, and claim you’re an indie dev. However, I’ve been reading the wiki a little here and there today, and I have to say I am over that hump… Joel said it best:

… I felt exactly like that in 2010 … then I got over it and released 3 games last year 🙂

OK… I’m sold. Now to start playing, or to find the time to play!

Update: I’m having a bit of trouble creating a project with “Glue” at the moment, so we’ll see how it pans out.

New game I’ve been working on

Back in the MS-DOS era, I was a PC gamer. One of my favorites was a cult classic called UFO: Enemy Unknown, which introduced the entire X-COM series. I still play it to this day… it has major replay value. The graphics are cheesy, the plot line is vague at best, but the gameplay is so amazing that it keeps me and many fans coming back for more.

I do not want to “remake” the X-COM games at all… I simply love the idea of their gameplay model: geoscape/battlescape. Basically, as described from the Wikipedia page:

The game takes place within two main views: the Geoscape and the Battlescape.[4] Gameplay begins on January 1, 1999, with the player choosing a location for their first base on the Geoscape screen: a global view representation of Earth as seen from space (displaying X-COM bases and aircraft, detected UFOs, alien bases, and sites of alien activity). The player can view the X-COM bases, make changes to them, equip fighter aircraft, order supplies and personnel (soldiers, scientists and engineers), direct research efforts, schedule manufacturing of advanced equipment, sell alien artifacts to raise money, and deploy X-COM aircraft to either patrol designated locations, intercept UFOs, or send X-COM ground troops to a mission (using transport aircraft)….

Gameplay switches to its tactical combat phase whenever X-COM ground forces come in contact with aliens.[4] In the Battlescape screen the player commands his soldiers against the aliens in an isometric turn-based battle. One of three outcomes is possible: either the X-COM forces are eliminated, the alien forces are neutralised, or the player chooses to withdraw. The mission is scored based on the number of X-COM units lost, civilians saved or lost, aliens killed or captured, and the number and quality of alien artifacts obtained. Troops may also increase in rank or abilities, if they made successful use of their primary attributes (e.g. killing enemies). Instead of experience points, the combatants gain points in skills like Psi or Accuracy, a semi-random amount depending on how much of the action they participated in. In addition to personnel, the player may use unmanned ground vehicles, outfitted with heavy weapons and armour but not gaining experience. Recovered alien artifacts can then be researched and possibly reproduced. Captured live aliens may produce information, possibly leading to new technology including psionic warfare.

I am taking this type of game play and introducing a few different concepts (subject to change):

  1. Class specialization – The type of specialization you see in RPG games, referred to sometimes as the “holy trinity” in MMORPG games such as World of Warcraft. In this scenario, you would have something like the following classes:
    • Defender – Charges into combat, drawing the attention of attackers away from others and absorbing attacks. (your tank)
    • Sniper – Fires from a safe distance doing massive amounts of damage by targeting vital spots (ranged dps)
    • Combat Medic – I haven’t figured out how this guy will heal from a distance and make sense. Psionics perhaps? (healer)
  2. Experience points – As missions are completed, experience points are awarded and soldiers will gain levels.
  3. Talent trees – Enhance natural abilities as you progress in levels, choosing to further specialize: a combat medic might specialize in close proximity and self-preservation, or a tank might go high damage because you’ve hired two medics.

I have always wanted to be able to play X-COM style games on a portable device, so I am targeting Windows Phone 7, but I am doing all development on a PC and abstracting all platform-specific elements, so it should be fairly easy to market it as a PC game if there is any interest.

I have a history of unfinished prototypes. I don’t want to do that anymore, which is why my 2012 new year’s resolution is to finish this thing. This is one game idea that should hold my attention. There is plenty to do, and I have a solid vision for how the game mechanics will work, give or take a few features. Please leave a comment if you are interested!

 

Putting the IGameActionManager interface to work

Things have been pretty hectic for me lately. I have three-week-old twins and barely enough time to do much of anything for myself. My coding comes in bits and pieces here and there, but I have managed to make some strides on a game I have been working on. I am not ready to announce anything about it yet, as it’s still just a prototype, but if it starts coming together to the point where I think it will be seen through to fruition, I will definitely share.

To the topic at hand!

I posted some time ago, prior to the initial Windows Phone 7 release, about an idea I had for managing input between devices and simplifying input management. Here is the post. I knew the theory was sound, but I recently got to put it to the test. I have been developing a game prototype as a Windows project, knowing that XNA is cross-platform. The plan has always been to release the game primarily for Windows Phone, but I want to be able to run and debug on the PC for simplicity, so I have been doing just that, with the hope that I would eventually port the thing to WP7. Recently I did just that, and it really gave my IGameActionManager abstraction a workout. I believe I have finally ironed out the best pattern to mimic the previous/current state model of the KeyboardState and MouseState objects while keeping the game code focused, simple, and unencumbered by platform-specific concerns.

I submit to you the results:

The interface becomes very simple:

public interface IGameActionManager
{
	GameActionState GetState();
}

Notice how it returns a new class, GameActionState. That class is a read-only data object that defines the state of input at any given frame. Here it is:

public class GameActionState
{
	public GameActionState(bool isJumping,
		Vector2 motion,
		Vector2? cameraVelocity,
		bool isIdle,
		bool isFiring,
		Vector2? screenSelectionType1,
		Vector2? screenSelectionType2,
		bool isQuitting,
		float zoomChange,
		Point? indicatorLocation)
	{
		_isJumping = isJumping;
		_motion = motion;
		_cameraVelocity = cameraVelocity;
		_isIdle = isIdle;
		_isFiring = isFiring;
		_screenSelectionType1 = screenSelectionType1;
		_screenSelectionType2 = screenSelectionType2;
		_isQuitting = isQuitting;
		_zoomChange = zoomChange;
		_indicatorLocation = indicatorLocation;
	}

	private Point? _indicatorLocation;
	public Point? IndicatorLocation
	{
		get
		{
			return _indicatorLocation;
		}
	}

	private float _zoomChange;
	public float ZoomChange
	{
		get
		{
			return _zoomChange;
		}
	}

	private bool _isJumping;
	public bool IsJumping
	{
		get
		{
			return _isJumping;
		}
	}

	private Vector2 _motion;
	public Vector2 Motion
	{
		get
		{
			return _motion;
		}
	}

	private bool _isIdle;
	public bool IsIdle
	{
		get
		{
			return _isIdle;
		}
	}

	private bool _isFiring;
	public bool IsFiring
	{
		get
		{
			return _isFiring;
		}
	}

	private Vector2? _screenSelectionType1;
	public Vector2? ScreenSelectionType1
	{
		get
		{
			return _screenSelectionType1;
		}
	}

	private Vector2? _screenSelectionType2;
	public Vector2? ScreenSelectionType2
	{
		get
		{
			return _screenSelectionType2;
		}
	}

	private bool _isQuitting;
	public bool IsQuitting
	{
		get
		{
			return _isQuitting;
		}
	}

	private Vector2? _cameraVelocity;
	public Vector2? CameraVelocity
	{
		get
		{
			return _cameraVelocity;
		}
	}
}

I previously noted that I used digital rather than analog values for some fields like IsMovingUp, IsJumping, etc., where it might be beneficial to use their analog equivalents. Here I have illustrated a partial change to that mindset.

Notice the Vector2 returned for the Motion property. The thought here is that a single vector can carry both the direction and magnitude of motion, instead of relying on some speed modifier and one-dimensional booleans. In the concrete implementations (GameActionManagerWindows and GameActionManagerWindowsPhone), you can use anything to produce this value. For instance, as you will see in my implementations later in this post, I use the directional arrows on the keyboard for the motion vector, but in the phone implementation I poll for the FreeDrag GestureType. The important thing to note is that the game code doesn’t care what you use. It just grabs the Motion property off the GameActionState and uses it, oblivious to where that value came from. (Side note: I always find it interesting how we talk about code as if it thinks or has a personality. I have seen this from so many developers, myself included, and it’s somewhat fascinating to me.)
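One wrinkle worth knowing about when you build the motion vector from digital keys (a general caveat, not something the implementations below address): holding two arrows at once yields (1, 1), whose length is about 1.414, so diagonal movement ends up faster than straight movement unless you normalize. XNA has Vector2.Normalize for this; here is a minimal standalone sketch of the same math using plain floats:

```csharp
using System;

class MotionDemo
{
	// Normalize an (x, y) motion vector so diagonal key presses don't
	// move the player faster than straight ones; (1, 1) has length ~1.414.
	static void Normalize(ref float x, ref float y)
	{
		float length = (float)Math.Sqrt(x * x + y * y);
		if (length == 0f)
			return; // no input: leave the zero vector alone

		x /= length;
		y /= length;
	}

	static void Main()
	{
		float x = 1f, y = 1f; // Right + Down held together
		Normalize(ref x, ref y);

		// Both components are now ~0.7071, so the overall speed stays 1.
		Console.WriteLine(Math.Abs(x - 0.7071f) < 0.001f);
		Console.WriteLine(Math.Abs(y - 0.7071f) < 0.001f);
	}
}
```

Whether you normalize inside GetState() or in the game code is a judgment call; doing it in the manager keeps the Motion contract consistent across platforms.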

The other things I want to point out before moving on to pasting the implementations for windows and wp7 are these two properties:

private Vector2? _screenSelectionType1;
public Vector2? ScreenSelectionType1
{
	get
	{
		return _screenSelectionType1;
	}
}

private Vector2? _screenSelectionType2;
public Vector2? ScreenSelectionType2
{
	get
	{
		return _screenSelectionType2;
	}
}

Notice the Vector2? (i.e. Nullable&lt;Vector2&gt;) return type. That lets me give the game two distinct states for the action: a selection was made (with coordinates), or no selection was made (null). For an otherwise non-nullable value type (a struct), Nullable&lt;T&gt; is a great way to turn it into a type that can be assigned null. It’s useful in situations like this, where reserving some special Vector2 value wouldn’t help because every Vector2 is also valid input (i.e. I can’t just use Vector2.Zero to mean no selection was made, because Vector2.Zero is a completely valid selection). new Vector2(-1, -1) also seemed strange to use since I’m not really sure what the coordinate system should look like, and there is the perfectly viable Nullable&lt;T&gt; wrapper to use, so why not!
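To illustrate the idea outside of XNA (this is a generic sketch, not taken from the game code, and Point2 is a hypothetical stand-in struct): wrapping any struct in Nullable&lt;T&gt; gives it a "no value" state you test with HasValue, which is exactly how the selection properties distinguish "no selection" from a legitimate selection at the origin.

```csharp
using System;

// A stand-in struct for XNA's Vector2, just so this compiles on its own.
struct Point2
{
	public float X, Y;
	public Point2(float x, float y) { X = x; Y = y; }
}

class NullableDemo
{
	static void Main()
	{
		Point2? selection = null;              // no selection this frame
		Console.WriteLine(selection.HasValue); // False

		selection = new Point2(0f, 0f);        // a real selection at the origin
		Console.WriteLine(selection.HasValue); // True: distinct from "no selection"
	}
}
```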

On to the implementations!

Windows/PC:

public class GameActionManagerWindows : IGameActionManager
{
	public GameActionState GetState()
	{
		KeyboardState keyboardState = Keyboard.GetState();
		MouseState mouseState = Mouse.GetState();
		Vector2? selectionType1 = null;
		Vector2? selectionType2 = null;

		if (mouseState.LeftButton == ButtonState.Pressed)
		{
			selectionType1 = new Vector2(mouseState.X, mouseState.Y);
		}

		if (mouseState.RightButton == ButtonState.Pressed)
		{
			selectionType2 = new Vector2(mouseState.X, mouseState.Y);
		}

		bool isJumping = keyboardState.IsKeyDown(Keys.Space);
		Vector2 motion = Vector2.Zero;

		if (keyboardState.IsKeyDown(Keys.Right))
		{
			motion.X += 1.0f;
		}
		if (keyboardState.IsKeyDown(Keys.Left))
		{
			motion.X -= 1.0f;
		}
		if (keyboardState.IsKeyDown(Keys.Up))
		{
			motion.Y -= 1.0f;
		}
		if (keyboardState.IsKeyDown(Keys.Down))
		{
			motion.Y += 1.0f;
		}

		bool isFiring = mouseState.RightButton == ButtonState.Pressed;

		bool isIdle = !keyboardState.IsKeyDown(Keys.Up) &&
			!keyboardState.IsKeyDown(Keys.Down) && !keyboardState.IsKeyDown(Keys.Left) &&
			!keyboardState.IsKeyDown(Keys.Right) && !keyboardState.IsKeyDown(Keys.Space);

		bool isQuitting = keyboardState.IsKeyDown(Keys.Escape);

		bool isZoomingIn = keyboardState.IsKeyDown(Keys.OemPlus);
		bool isZoomingOut = keyboardState.IsKeyDown(Keys.OemMinus);

		return new GameActionState(isJumping, motion, Vector2.Zero,
			isIdle, isFiring, selectionType1, selectionType2, isQuitting,
			isZoomingIn ? 1.0f : isZoomingOut ? -1.0f : 0.0f,
			new Point(mouseState.X, mouseState.Y));
	}
}

I chose to use the keyboard plus/minus to zoom, but you could just as easily use the mouse scroll wheel here by polling the MouseState.
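If you went the scroll-wheel route, note that XNA’s MouseState.ScrollWheelValue is cumulative since the game started, so you would diff it against last frame’s reading. A minimal sketch of that diffing logic with plain ints (ScrollWheelValue is real XNA; the helper and the 0.1f zoom scale here are my own illustrative choices):

```csharp
using System;

class ScrollZoomDemo
{
	static int previousScroll;

	// MouseState.ScrollWheelValue is cumulative since the game started,
	// so diff it against last frame's reading. One wheel notch reports 120.
	static float GetZoomChange(int currentScroll)
	{
		float change = (currentScroll - previousScroll) / 120f * 0.1f;
		previousScroll = currentScroll;
		return change;
	}

	static void Main()
	{
		Console.WriteLine(GetZoomChange(120) > 0f);  // True: one notch up
		Console.WriteLine(GetZoomChange(120) == 0f); // True: wheel didn't move
		Console.WriteLine(GetZoomChange(0) < 0f);    // True: one notch down
	}
}
```

The previousScroll field would live in GameActionManagerWindows, which is another nice property of the abstraction: the per-frame bookkeeping stays inside the platform implementation.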

The WP7 Implementation:

#if WINDOWS_PHONE
using Microsoft.Xna.Framework.Input.Touch;
#endif

#if WINDOWS_PHONE
    public class GameActionManagerWindowsPhone : IGameActionManager
    {
        Vector2 MaxCameraVelocity = new Vector2(5f, 5f);
        public GameActionState GetState()
        {

            TouchPanel.EnabledGestures =
                GestureType.Hold |
                GestureType.Tap |
                GestureType.DoubleTap |
                GestureType.FreeDrag |
                GestureType.Flick |
                GestureType.Pinch;

            Vector2? selectionType1 = null;
            Vector2? selectionType2 = null;

            // Handle all of the gestures. Since multiple gestures may be
            // available, we read them in a loop so the TouchPanel's queue
            // doesn't get backed up with old data.
            float zoomChange = 0.0f;
            Vector2 motion = Vector2.Zero;
            Vector2? cameraVelocity = null;

            while (TouchPanel.IsGestureAvailable)
            {
                // read the next gesture from the queue
                GestureSample gesture = TouchPanel.ReadGesture();

                switch (gesture.GestureType)
                {
                    case GestureType.Pinch:
                        // get the current and previous locations of the two fingers
                        Vector2 a = gesture.Position;
                        Vector2 aOld = gesture.Position - gesture.Delta;
                        Vector2 b = gesture.Position2;
                        Vector2 bOld = gesture.Position2 - gesture.Delta2;

                        // figure out the distance between the current and previous locations
                        float d = Vector2.Distance(a, b);
                        float dOld = Vector2.Distance(aOld, bOld);

                        // calculate the difference between the two and use that to alter the scale
                        zoomChange = (d - dOld) * .015f;

                        // Allow dragging while pinching by taking the average of the two touch points' deltas
                        motion = (gesture.Delta + gesture.Delta2) / 2;
                        break;
                    case GestureType.FreeDrag:
                        motion = gesture.Delta;
                        cameraVelocity = Vector2.Zero;
                        break;
                    case GestureType.Hold:
                        selectionType2 = gesture.Position;
                        break;
                    case GestureType.Tap:
                        selectionType1 = gesture.Position;
                        break;
                    case GestureType.Flick:
                        cameraVelocity = Vector2.Clamp(gesture.Delta, -MaxCameraVelocity, MaxCameraVelocity);
                        break;
                }
            }

            return new GameActionState(false, motion, cameraVelocity, false, false, selectionType1, selectionType2, false, zoomChange, null);
        }
    }
#endif

I separated that into two #if blocks because both the using statement at the top and the class itself contain Windows Phone-specific code. The #if blocks are there so the Windows project can still compile with the phone-specific code in it: the WINDOWS_PHONE symbol is not defined in that project, so the compiler skips that code.

And the grand finale… the Game code in all its simplified glory!

To define the proper GameActionManager, in the Initialize() method of the Game class:

#if WINDOWS_PHONE
            gameActionManager = new GameActionManagerWindowsPhone();
            // You could add an #elif here for XBOX, but I am not targeting XBLIG yet
#else
            gameActionManager = new GameActionManagerWindows();
#endif

And the Update() method:

        protected override void Update(GameTime gameTime)
        {
            gameActionState = gameActionManager.GetState();

            if (gameActionState.IsQuitting)
            {
                this.Exit();
            }

            updateCamera();
            updateUnits();
            if (bullet.alive)
            {
                bullet.currentPosition += bullet.velocity;
            }
            handleSelections();
            handleFiring();

            previousGameActionState = gameActionState;

            base.Update(gameTime);
        }
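The previousGameActionState field stored at the bottom of Update() is what lets the game detect edges, e.g. "fired this frame" versus "still holding the button", mirroring the classic previous/current KeyboardState pattern this design set out to mimic. A self-contained sketch of the comparison (the bool-pair helper is illustrative, not from the game):

```csharp
using System;

class EdgeDetectDemo
{
	// True only on the frame where an action goes from up to down, the same
	// comparison you'd make between gameActionState and previousGameActionState.
	static bool JustPressed(bool current, bool previous)
	{
		return current && !previous;
	}

	static void Main()
	{
		Console.WriteLine(JustPressed(true, false)); // True: pressed this frame
		Console.WriteLine(JustPressed(true, true));  // False: still being held
		Console.WriteLine(JustPressed(false, true)); // False: just released
	}
}
```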

        private void updateCamera()
        {
            if (camera.Velocity.X != 0.0f || camera.Velocity.Y != 0.0f)
            {
                camera.Update();
            }

            if (gameActionState.CameraVelocity != null)
            {
                camera.Velocity = gameActionState.CameraVelocity.Value;
            }

            camera.Pos -= gameActionState.Motion;
            camera.Zoom += gameActionState.ZoomChange;
        }

Notice particularly the updateCamera() method. Camera is a Camera2D class I use for the 2D camera; we don’t really need to talk about it here (maybe I’ll do another post on that). Regardless, the Pos and Zoom properties of the camera object drive transformations on the draw calls, which give the appearance of a camera moving around. The gameActionState.Motion value it reads comes, via polymorphism, from whichever GetState() implementation was chosen when the IGameActionManager was assigned in Initialize().

Good stuff. The best part about this whole idea is that because the platform-specific bits are abstracted away, the Game code itself isn’t concerned with anything to do with the platform. The Game code is now completely portable across multiple devices, and it’s future-proof!

This blog was mentioned in Creators Club Communiqué 46

I know I read this one, but I completely missed my own name in it:
http://blogs.msdn.com/b/xna/archive/2010/03/18/creators-club-communiqu-46.aspx

Pretty amazing, and I only saw it thanks to a pingback. Now I have to get back into XNA… I miss it and I haven’t posted anything here since March. I think that’s when I started playing WoW again.

A coworker recently shared a game called Chain Reaction with me that I think would translate well into XNA. Better yet, the graphics are easy to build from scratch, which removes one big reason I abandon projects: I’m not even a shred of an artist or digital music producer (I can lay tracks, but they aren’t good), so a lack of content tends to make indie dev difficult for me. I might give this game a go and see how it turns out.