Unit test structure guidelines

I have recently started structuring my unit tests differently, and I have to share. Prior to this new convention, things were messy. Let’s look at an example.

Let’s say there’s a class you need to test called WidgetController. This controller is supposed to be responsible for inserting, updating, and deleting Widgets. It has those 3 actions on it:

public class WidgetController
{
    public void Insert(Widget widget)
    {
        // Do something
    }
    public void Update(Widget widget)
    {
        // Do something
    }
    public void Delete(int widgetId)
    {
        // Do something
    }
}

Without the new method of unit test structure, I might have tested it this way:

[TestClass]
public class WidgetControllerTests
{
    [TestMethod]
    public void when_inserting_someassertion()
    {
    }

    [TestMethod]
    public void when_updating_someassertion()
    {
    }

    [TestMethod]
    public void when_deleting_someassertion()
    {
    }
}

This has the benefit of only using one class for all of the WidgetController tests, but this class can quickly grow. The new method I use:

public class WidgetControllerTests
{
    [TestClass]
    public class InsertTests
    {
        [TestMethod]
        public void someassertion()
        {
        }
    }

    [TestClass]
    public class UpdateTests
    {
        [TestMethod]
        public void someassertion()
        {
        }
    }

    [TestClass]
    public class DeleteTests
    {
        [TestMethod]
        public void someassertion()
        {
        }
    }
}

With one assertion made, this looks cumbersome, but imagine when you have 15 different mini assertions for an insert test. Perhaps your business rules are fairly complex, and you need to verify different aspects through many different unit tests. This gives you the structure and separation to really get some good organization for your tests. The test UI and any test result reporting suites will be able to break your tests down by subclass, giving you a much better idea of what’s going on.
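To make that payoff concrete, here is a sketch of how one of the nested classes might grow. The test names and the bare WidgetController setup are hypothetical, but the structure is the point: [TestInitialize] now scopes to a single action, so shared setup stays small.

```csharp
public class WidgetControllerTests
{
    [TestClass]
    public class InsertTests
    {
        private WidgetController _controller;

        // Runs before every test in InsertTests only, so the
        // arrangement belongs to the Insert action alone
        [TestInitialize]
        public void Setup()
        {
            _controller = new WidgetController();
        }

        [TestMethod]
        public void inserting_a_valid_widget_does_not_throw()
        {
            _controller.Insert(new Widget());
        }

        [TestMethod]
        [ExpectedException(typeof(ArgumentNullException))]
        public void inserting_a_null_widget_throws()
        {
            _controller.Insert(null);
        }
    }
}
```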

10 reasons to use TDD (Test Driven Development)

  1. It will help you improve your OOD

    Proper object oriented design is the key to writing good extendable, maintainable, and stable software. If you pile too much functionality into one big class or one big method, you’re just asking for trouble. TDD makes it easier to adhere to the SRP (Single Responsibility Principle) by encouraging you to create smaller classes with less functionality.

  2. Get it in front of your users faster

    Designing your classes to inherently rely on abstract (mockable) dependencies is a sure way to get on the fast track to building a demo. For example, in a database driven application, a mock data layer can be substituted with generated data to be consumed by the front end. Though somewhat unorthodox, mocking frameworks can work just as well to fake a back end system in a running application as they can for unit test mocking! In software development, getting the product in front of your users can be the most important step, because it gives them a chance to change requirements early when it’s less painful.
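As a sketch of what that substitution can look like (IWidgetRepository, FakeWidgetRepository, and SqlWidgetRepository are hypothetical names, not from this post), the composition root just swaps one registration:

```csharp
// Hypothetical names; the point is that only the composition root changes.
var container = new UnityContainer();

// For the demo build: canned, generated data behind the same interface
container.RegisterType<IWidgetRepository, FakeWidgetRepository>();

// For the real build, swap in the actual data layer instead:
// container.RegisterType<IWidgetRepository, SqlWidgetRepository>();
```

The front end only ever resolves IWidgetRepository, so it cannot tell the difference.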

  3. Good coverage

    TDD will have you writing a test for every bit of functionality you are coding. You are automatically forcing yourself to have a high code coverage metric if you stick to the cadence: write a failing test, make it pass, refactor.

  4. Quickly verify all functionality

    When refactoring code, it is very useful to be able to quickly verify all existing functionality in an instant. This isn’t necessarily a benefit of TDD itself, but rather having good unit test coverage of business rules. Sometimes in an emergency situation (e.g. an outage due to code bug), we are forced to cut corners and not necessarily write a test first. Having that safety net of unit tests there is a huge confidence booster. Naturally by developing with TDD, you are building a safety net as you go!

  5. It forces you to rely only on abstractions (Dependency Inversion Principle).

    You wouldn’t solder a lamp directly to the electrical wiring in a wall, would you?

    The dependency inversion principle is one of the five SOLID principles of object oriented design. It states that when designing a class, any other classes it uses should be referenced via abstractions. That is to say, a class should not depend directly on another concrete type. When done correctly, test driven development encourages adherence to this principle, because you will always need something to mock when writing a test.
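A minimal sketch of the principle (IClock, WidgetService, and the ExpiresOn property are hypothetical names, not from this post):

```csharp
// The service depends on the IClock abstraction rather than on
// DateTime.Now directly, so a unit test can supply a fake clock.
public interface IClock
{
    DateTime Now { get; }
}

public class WidgetService
{
    private readonly IClock _clock;

    public WidgetService(IClock clock)
    {
        _clock = clock;
    }

    public bool IsExpired(Widget widget)
    {
        // The test controls time by mocking IClock
        return widget.ExpiresOn < _clock.Now;
    }
}
```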

  6. Smaller problems are easier to solve.

    (2 + 2) / 2 – 2 * 6 = ?

    If you understand the order of operations (if you’re here, I’m sure you do), then your brain automatically broke this equation down into solvable parts. Likely, you figured out that 2 + 2 is 4, and 2 * 6 is 12, so the equation became 4/2 – 12. Then you might solve 4/2 to get 2, and finish out with 2 – 12 = -10. The point is that you broke the larger problem down into smaller chunks, because that’s the easiest way to get to the answer. (Bonus points if you can just look at equations like that and spit out an answer!) Any programmer worth their salt isn’t going to attack a large application by writing one big blob of code. They’re going to understand what the customer wants, break it down into pieces, and build those pieces to fit together for the larger system. TDD is a great way to do just that without completely understanding the big picture immediately.

  7. It feels really good

    I’ve done quite a few projects with TDD now. The first time it feels strange, like it can’t possibly work. You toil for a few days or weeks on writing these individual little bits, solving little tiny problems as you go. The larger problem is not necessarily in your brain the entire time, so it feels foreign. Finally, when it comes time to make a demo, you get to connect all the little pieces, which almost always involves an IoC container for me. This is a very satisfying process and brings me a lot of joy.

  8. Opens up the path for future testing

    This is a topic I have talked about at length. Some may not see the value in this immediately, but I find this extremely important. Simply by following the TDD pattern, you are ensuring future testability of your classes. No one writes bug-free code every time. I can’t tell you how many times I have been seriously happy when it comes time to fix a bug in code that I’ve used TDD with. I come back to find that in order to reproduce the bug, I just have to provide a very specific mock up in a new unit test. The path was laid by me in the past, and now it is super easy to prove that the bug is fixed by fixing a failed unit test.

  9. Stops analysis paralysis

    From Wikipedia:

    Analysis paralysis or paralysis of analysis is an anti-pattern, the state of over-analyzing (or over-thinking) a situation so that a decision or action is never taken, in effect paralyzing the outcome.

    Sure, any new application needs some analysis, but when the above happens, nothing gets done. TDD allows you to get started right away by solving small problems immediately. Sometimes, the bigger picture starts to come together when you start chipping away at the little bits.

  10. Slow is smooth, and smooth is fast

    This is an old saying applied to targeting a firearm. The saying explains that if you move too fast, you’re going to make a mistake and fail. I believe the same saying can be applied to software development as well. The argument against TDD and unit tests in general that I’ve heard in the past is that they slow you down. It’s a natural thought to have: I can either start writing code to solve the problem, or start writing code to test non-existent code, and then write the same code to solve the problem anyway.

    WRONG!

    This argument infuriates me, because it typically comes from someone in power who is trying to justify cutting corners. Sure, if you put two people on solving the same complex problem, one with TDD and one hacking directly towards a solution, the latter is going to finish quicker, but that’s just not a real-life scenario long term. With any given application, someone is going to need changes. While TDD might take longer in the initial phases, it pays dividends extremely quickly when changes start pouring in. The class design is decoupled and class responsibilities are seriously limited, so requirements changes very rarely mean changing working code! Instead, it is safer and quicker to extend and write new code. Fewer bugs are created, and new features can be added a lot quicker in the long run.

Why using a generic repository pattern in Entity Framework still works!

I have had lengthy conversations on the topic of using a generic repository pattern on top of Entity Framework (I’m kainazzzo). I believe that I am probably in a minority of developers that think it’s a perfectly acceptable practice. Here are some of the arguments people make about using IRepository<T>:

  1. It couples your class directly to Entity Framework, making it difficult to switch
  2. There is no value-add using a generic repository over top of EF, because it is already a Repository pattern
    • i.e. DbSet<Entity> is the Repository, and DbContext is the UnitOfWork
  3. It causes “leaky” abstraction, encouraging .Include() calls to be made in your business layer code
    • Or that in general, IQueryable should not be returned from a repository, because deferred queries are leaky

I will address these points directly.

It couples your class directly to Entity Framework, making it difficult to switch

The last time I changed data access in running production code was never. There would have to be an astronomically large gulf in the functionality between two ORM frameworks for this to even be a consideration. Once code is in production and being used by real live people, any change is risk. Risk is not taken unless the change is deemed to be of some value. Simply changing ORM or data access frameworks would be a tough sell to any business unit. Once you choose Entity Framework, you will not change to NHibernate. It is just not going to happen, so this is a difficult argument to make in my opinion. If you really have a problem with it, you can still do a higher level implementation on top of EFRepository that abstracts away the rest of the EF bits.

There is no value-add using a generic repository over top of EF, because it is already a Repository pattern

BOLOGNA SANDWICH. Seriously… now I’m hungry for processed meat.

What I mean is that this is preposterous. Take my IRepository<T> class for example:

public interface IRepository<T>
{
    void InsertOrUpdate(T entity);
    void Remove(T entity);
    IQueryable<T> Find(Expression<Func<T, bool>> predicate);
    IQueryable<T> FindAll();
    T First(Expression<Func<T, bool>> predicate);
    T FirstOrDefault(Expression<Func<T, bool>> predicate);
}

And the EF implementation:

    public class EFRepository<T> : IRepository<T>
        where T : class, IEntity
    {
        private readonly IUnitOfWork<T> _unitOfWork;
        private readonly DbSet<T> _dbSet;

        public EFRepository(IUnitOfWork<T> unitOfWork)
        {
            _unitOfWork = unitOfWork;
            _dbSet = _unitOfWork.DbSet;
        }

        public void InsertOrUpdate(T entity)
        {
            if (entity.Id != default(int))
            {
                // Existing entity: attach it if EF isn't tracking it yet,
                // then mark it modified so SaveChanges issues an UPDATE
                if (_unitOfWork.Entry(entity).State == EntityState.Detached)
                {
                    _dbSet.Attach(entity);
                }
                _unitOfWork.Entry(entity).State = EntityState.Modified;
            }
            else
            {
                // New entity: Id is still the default, so insert it
                _dbSet.Add(entity);
            }
        }

        public void Remove(T entity)
        {
            _dbSet.Remove(entity);
        }

        public IQueryable<T> Find(Expression<Func<T, bool>> predicate)
        {
            return FindAll().Where(predicate);
        }

        public IQueryable<T> FindAll()
        {
            return _dbSet;
        }

        public T First(Expression<Func<T, bool>> predicate)
        {
            return FindAll().First(predicate);
        }

        public T FirstOrDefault(Expression<Func<T, bool>> predicate)
        {
            return FindAll().FirstOrDefault(predicate);
        }
    }

How many times have you looked up how to do “Insert or Update” in EF? I don’t have to do that any more. I know my repository does it well, and all I have to do is make my entities implement IEntity, which simply ensures that an Id field exists. This is not a problem with DB first implementations, since EF generates classes as partials. My value-add comes from EFRepository implementing basic Id checking in order to initiate a proper update in EF.
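For illustration, IEntity and a call site might look like the following sketch. The unitOfWork.Save() call is an assumption about the surrounding API; only the int Id requirement is stated in the post.

```csharp
// IEntity just guarantees an Id, so the repository can tell
// inserts (default Id) from updates (non-default Id).
public interface IEntity
{
    int Id { get; set; }
}

// At the call site, one method covers both cases:
repository.InsertOrUpdate(new Widget { Name = "Brand new" });       // Id == 0: insert
repository.InsertOrUpdate(new Widget { Id = 42, Name = "Edited" }); // Id != 0: update
unitOfWork.Save(); // hypothetical unit-of-work commit
```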

Then, there is unit testing. Some people would argue the value-add of unit tests, but I see unit tests as a priceless artifact one puts on display and protects. They are a window into your code and how it is supposed to function, and they can be a huge safety net. See my post on TDD for more information. Let’s say you were developing a game data editor and you had a class that relied on loading Enemies (EnemyEditor), perhaps one of the simple requirements was that when loading enemies (GetEnemies()), the list of Abilities was not null, even if the database provided it as such:

[TestMethod]
public void enemies_have_non_null_ability_list()
{
    var container = new MockingContainer<EnemyEditor>();
    // FindAll() returns IQueryable<Enemy>, so the arranged list is converted with AsQueryable()
    container.Arrange<IRepository<Enemy>>(r => r.FindAll()).Returns(
        new List<Enemy>
        {
            new Enemy
            {
                Name = "Enemy1",
                Abilities = null
            }
        }.AsQueryable());

    var enemies = container.Instance.GetEnemies();

    Assert.IsNotNull(enemies[0].Abilities);
}

This unit test is for a very small requirement, and the arrangement of what IRepository<Enemy> returned was extremely easy to write (if you’re familiar with mocking). You didn’t have to jump through any hoops to mock up the DbSet within the DbContext by creating IDbContext or anything, which admittedly is not impossible, but it isn’t entirely intuitive, all apt alliteration aside.
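An EnemyEditor.GetEnemies that satisfies this test might look like the following. This is a sketch; the post doesn’t show the real implementation, and the Ability type is assumed.

```csharp
public class EnemyEditor
{
    private readonly IRepository<Enemy> _repository;

    public EnemyEditor(IRepository<Enemy> repository)
    {
        _repository = repository;
    }

    public List<Enemy> GetEnemies()
    {
        var enemies = _repository.FindAll().ToList();
        foreach (var enemy in enemies)
        {
            // The business rule under test: Abilities is never null,
            // even if the data layer returned it as null
            enemy.Abilities = enemy.Abilities ?? new List<Ability>();
        }
        return enemies;
    }
}
```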

So there is value in layering on top of Entity Framework with a generic repository pattern. The value is in InsertOrUpdate, as well as simplified unit test mocking.

It causes “leaky” abstraction, encouraging .Include() calls to be made in your business layer code

Given the value that it adds, I am totally alright with this. Yes, IQueryable.Include “leaks” implementation details through the abstraction layer, because .Include() only exists in Entity Framework; however, given that I already stated the fact that the ORM will never change, I am alright with making a conscious decision NOT to abstract away the ORM. I am not abstracting away Entity Framework… it is already an abstraction of data access that I am comfortable with. I am actually EXTENDING Entity Framework by adding a generic repository pattern on top of it. It is much simpler to deal with IRepository<T> and IUnitOfWork<T> than to have some GameEntities override that mocks the DbSet objects inside. Also, I will say it again: you can still use the generic repository pattern as a starting point, and further abstract away the EF specific bits by using a more domain-centric data access object.
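As a sketch of that last point (EnemyRepository is a hypothetical name; Include here is the EF extension method from System.Data.Entity):

```csharp
// A domain-centric layer on top of the generic repository keeps the
// EF-specific Include call out of business code entirely.
public class EnemyRepository
{
    private readonly IRepository<Enemy> _inner;

    public EnemyRepository(IRepository<Enemy> inner)
    {
        _inner = inner;
    }

    public List<Enemy> GetEnemiesWithAbilities()
    {
        // Only this class knows the query is backed by Entity Framework
        return _inner.FindAll().Include(e => e.Abilities).ToList();
    }
}
```

Business code then calls GetEnemiesWithAbilities() and never sees an IQueryable or an .Include().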

It is my experienced opinion that software development is always about trade-offs. There is never a one-size-fits-all solution, but there are patterns that get us as far as 99% of the way there. It’s up to us as software craftspeople to massage and mold code into a product that works and is not a burden to maintain.

I leave you with two quotes from Bruce Lee:

“Don’t think, Feel, it is like a finger pointing out to the moon, don’t concentrate on the finger or you will miss all that heavenly glory.”

To me, coming up with reasons not to do the most obvious, simply because it is not exactly like you think it should be is like looking at the finger.

“All fixed set patterns are incapable of adaptability or pliability. The truth is outside of all fixed patterns.”

Without adapting the known repository pattern to fit the Entity Framework appropriately, we limit ourselves to something that is not ideal for the given circumstance.

 

Test Driven Development (TDD) vs. Traditional Testing

TDD can get you pretty far, but integration testing is necessary.

The subject of this post is a bit of a misnomer, because the two are not mutually exclusive. That is to say, test driven development is not a replacement for testing. In fact, test driven development has less to do with testing than it does with design. TDD should drive your class design in such a way that makes it easier to get to the real testing phase. There are certain cases that are not going to be apparent during the initial class design, so when developing any application, testing should begin as soon as possible. TDD can get you there faster, because a lot of the pieces of the application can be faked since the design is testable!

Anything that gets mocked in a unit test can be faked in a built and deployed application, so business users, UX specialists, and designers will get a chance to play with the app to tweak requirements very early in the process! This early change is a lot less risky than late change when an application is “done” because the whole system is less complex.

Take for example my most recent project for which I am using TDD. It is a gesture library for desktop games using XNA (not Windows 8 store games). I created a GestureProvider object which relied on an ITouchEventProvider interface to receive raw TouchEvent objects and return GestureSample objects. Using just four conceived objects, I wrote some simple tests that would prove Tap, Drag, and Pinch gestures could be detected given the proper touch events.

The tap test went something like…

[TestMethod]  
public void single_tap_registers_from_one_touch()  
{  
    // Given a mock ITouchEventProvider that returns the following
    _container  
        .Arrange<ITouchEventProvider>(p => p.Events)  
        .Returns(new List<TouchEvent>
        {  
            new TouchEvent  //    touch down at 0,0
            {  
                Id = 1,  
                Position = Vector2.Zero,  
                Action = TouchEvent.TouchEventAction.Down,  
                TimeStamp = DateTime.Now  
            },   
            new TouchEvent  //    touch up at 0,0
            {  
                Id = 1,  
                Position = Vector2.Zero,  
                Action = TouchEvent.TouchEventAction.Up,  
                TimeStamp = DateTime.Now.AddMilliseconds(200.0)  
            }  
        });  

    var gestureProvider = _container.Instance;  

    // Get gestures from the real GestureProvider object (system under test or SUT)
    var samples = gestureProvider.GetSamples();  

    // Assert that there is one GestureSample object for a Tap at 0,0 
    var gestureSamples = samples.ToList();  
    Assert.AreEqual(1, gestureSamples.Count);  

    var tap = gestureSamples[0];  
    Assert.AreEqual(Vector2.Zero, tap.Delta);  
    Assert.AreEqual(Vector2.Zero, tap.Delta2);  
    Assert.AreEqual(Vector2.Zero, tap.Position);  
}

I did that test, and another for Drag and Pinch. Everything seemed to be going so well that I wanted to test them out, because I had a sneaking suspicion that I was missing something. I wrote up a quick test for a real ITouchEventProvider implementation that would use an interop library to listen for events, and provide them to the GestureProvider. I fired up a real game and added the necessary code to use the GestureProvider. I noticed one thing right away: Tap was not registering as a tap, but instead it was a drag. I double checked my tests, and it all looked ok, so I had to debug a bit. Eventually I found that my assumption about what events would fire for a tap was flawed. There could be any number of “move” events between “down” and “up”. I made the quick fix to add one move event to the test arrangement and fixed the GestureProvider so that the test passed, and then it worked. This proves that integration testing is a very important step in any system.
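The fix to the test arrangement amounted to adding a “move” event between the down and the up. Here is a sketch mirroring the test above; it assumes the TouchEventAction enum also has a Move member, which the post implies but doesn’t show.

```csharp
// Real hardware emits at least one "move" even for a tap,
// so the arranged event list grows from two events to three.
var events = new List<TouchEvent>
{
    new TouchEvent // touch down at 0,0
    {
        Id = 1,
        Position = Vector2.Zero,
        Action = TouchEvent.TouchEventAction.Down,
        TimeStamp = DateTime.Now
    },
    new TouchEvent // the previously missing "move" at the same position
    {
        Id = 1,
        Position = Vector2.Zero,
        Action = TouchEvent.TouchEventAction.Move,
        TimeStamp = DateTime.Now.AddMilliseconds(100.0)
    },
    new TouchEvent // touch up at 0,0
    {
        Id = 1,
        Position = Vector2.Zero,
        Action = TouchEvent.TouchEventAction.Up,
        TimeStamp = DateTime.Now.AddMilliseconds(200.0)
    }
};
```

With this arrangement, a GestureProvider that treats small intermediate moves as part of a tap passes the test and matches the real-world behavior.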

My unit test alone did not make the whole system work, but via TDD, I had designed the classes such that there was a clear path to fix the test so that it satisfied the real-world scenario instead of the erroneous assumption I made. Chalk up another TDD win!

JavaScript and the “this” keyword

One of the things that tripped me up when I first learned JavaScript was the “this” keyword. I came from a pure OO world, where classes were clearly defined objects, and “this” always referred to the object in context. JavaScript is a prototype based scripting language, though, and as such, doesn’t have the concept of a class. The “this” keyword means something very different in JavaScript.

According to the ECMAScript language specification:

The this keyword evaluates to the value of the ThisBinding of the current execution context. 

And the “ThisBinding” keyword is:

The value associated with the this keyword within ECMAScript code associated with this execution context.

So, both definitions refer to the other, which I was always taught in elementary school was not to be done! In my opinion, this definition is rather useless to the new JavaScript developer. Practical examples are much more useful!

In JavaScript in a browser, all global functions are bound to the Window object. You can see the behavior by running this bit of code in any browser’s developer console:

(function () {
    console.log(this); // [object Window]
})();

It’s a self executing function that exists in the global scope, and the console.log(this); line should write out [object Window] or something similar. That’s because “this” is a reference to Window. In a function like this, anything that exists on the Window object is also available through the “this” keyword:

(function () {
    this.alert(0);
})();

Because the alert function exists in the global scope, it is attached to the Window object. A related phenomenon occurs when you declare global variables:

hello = function() {
    alert('hello');
}; // the semicolon matters: without it, the parentheses below would be parsed as a call

(function () {
    this.hello(); // same as calling window.hello()
})();

This is the default behavior: a plain function call gets its “this” bound to the global object when executed. One can easily rebind a function with .call() or .apply():

var newobj = {
    alertit: function (message) {
        alert(message);
    }
};

var func = function (message1, message2) {
    this.alertit(message1 + " " + message2);
};

func.call(newobj, 'hello', 'world'); // alerts hello world
func.apply(newobj, ['hello', 'world']); // alerts hello world

The .call() and .apply() functions are both used to change the “this” keyword binding, or the “ThisBinding” as described in the ECMAScript spec. They do the exact same thing, but they pass parameters into the called function differently: .call() takes the parameters individually, and .apply() takes them as an array. Either way, you can see that ‘func’ is called the same way, with its “this” keyword bound to the newobj object.

In the case where a function is already scoped to an object, the “this” keyword will reference the parent object:

var NewObj = function () {};

NewObj.prototype.alert = function (message) {
    alert('NewObj alert: ' + message);
};

NewObj.prototype.test = function (message) {
    this.alert(message);
};

var newobj = new NewObj();
newobj.test('test'); // alerts NewObj alert: test
newobj.test.call(window, 'test'); // alerts test

In this case, the first newobj.test() is called while using the default binding, which is the instance of NewObj that was created, so when it calls this.alert(), it is calling the object’s prototype.alert() function.

The second time it is called, I forced a rebind to the window object, by using .call(). This causes the normal global alert() function to be the one called, since in this call, “this” is bound to window.

I hope this clears up a common problem for someone. I certainly had a hard time understanding “this” when I first learned JavaScript, but hopefully it makes more sense after seeing a few examples.

Test driven development (TDD) should drive your class design (Part 1 of 2): The wrong way

Consider the following scenario. You work for Company A, and they want you to write a simple web portal for them. You learn that there is a process in place already that spits out about 30 images as email attachments, and they would much rather see these displayed on a web site. The images will have to be categorized into main, sub, and chart type. Sounds simple enough, right?

I want to attack this problem from two angles to show how test driven development can, and should, drive your class design. Again, TDD should drive your class design. Simply put: if you do it right, TDD should cause you to actually change how you design your classes and their dependencies into a much looser, flexible, and less brittle pattern.

First, let’s look at it from how one might write this without TDD. This theoretical person is still good with object oriented programming, but hasn’t quite made the leap to TDD. Maybe you fit this bill? Maybe you’ve said “I don’t even know what this is going to look like, so how would I write a unit test for it before it’s written??” to yourself in the past? Let’s get started.

Obviously from this example, one can imagine a Chart object, and perhaps a ChartCategory object:

    public class Chart
    {
        public string FileName { get; set; }
        public ChartCategory Category { get; set; }
    }

    public class ChartCategory
    {
        public string Main { get; set; }
        public string Sub { get; set; }
        public string Type { get; set; }
    }

One might then surmise that they need a way to load charts from disk:

public class ChartReader
    {
        public IEnumerable<Chart> LoadChartsFromDisk(string path)
        {
            var files = System.IO.Directory.GetFiles(path);

            var charts = files.Select(f => new Chart
            {
                FileName = f
            }).ToList();

            foreach (var chart in charts)
            {
                chart.Category = GetCategory(chart, path);
            }

            return charts;
        }

        private ChartCategory GetCategory(Chart chart, string path)
        {
            var doc = XDocument.Load(System.IO.Path.Combine(path, "charts.xml"));
            var chartElement = doc.Descendants("chart").FirstOrDefault(x => x.Attribute("filename").Value == chart.FileName);
            if (chartElement != null)
            {
                return new ChartCategory
                {
                    Main = chartElement.Attribute("maincategory").Value,
                    Sub = chartElement.Attribute("subcategory").Value,
                    Type = chartElement.Attribute("charttype").Value
                };
            }
            return new ChartCategory();
        }
    }

GREAT! You even had the foresight to load categories from an XML document. This is going to work well, right? Let’s say you wire this all together and make a great front end website for them (I’m going to skip this part since it’s not all that relevant right now). You have your charts displaying on page, and business likes how it looks. You get a pat on the back for getting this done in a few days.

Now they want to add new features. They want charts to have a description. Come to find out, charts are only ever categorized into two main categories, and two static chart types, and they always want one chart type on the left of the page, and the other on the right. You make these changes. Later down the line, someone comes back and says well there could be a different chart type, and this chart type is actually a grouping of charts that, as a group, have to be shown together.

The point I’m trying to make is that this code is brittle and untestable. One guaranteed point in software development is that the requirements will change over time, and the later in the process you get change, the riskier it is.

Why is this code untestable?

Well, technically it’s not impossible to test this code, but let’s try writing a simple unit test. One that proves a chart gets its category from the xml document:

        [TestMethod]
        public void charts_get_their_categories_from_xml_documents()
        {
            // Arrange
            var reader = new ChartReader();

            // Act
            var charts = reader.LoadChartsFromDisk(@"c:\unittests").ToList();

            // Assert
            Assert.AreEqual("main", charts.First().Category.Main);
            Assert.AreEqual("sub", charts.First().Category.Sub);
            Assert.AreEqual("type", charts.First().Category.Type);
        }

Halfway through this test, it becomes apparent that the ChartReader object has two major dependencies: the file system, and the xml document. That is to say, in order to pass this test, the c:\unittests folder will have to contain image files and an xml document to simulate the assertions. This causes a few problems:

  1. Disk IO is slow
  2. This test is very brittle, as the arrange section is not really arranging anything.
    1. The test parameters are setup on disk and are easy to modify by anyone
  3. In order to deploy this to any type of build server that runs unit tests, the same filesystem dependencies need to be met.

While this class design got the job done, I hope you can see that it is not ideal. How can TDD help? Find out in part 2!

Unit testing with mocks

Recently I have really gotten into Test Driven Development (TDD). My first real try at TDD was for a library that I have yet to complete for integrating spriter animations into FlatRedBall. Now I am working on something work related, so I can’t give any specific details as it is proprietary information. I can, however, come up with some contrived examples that may be beneficial to the community regarding Mocking.

There are several different mocking frameworks out there, and I started using Moq first, but I have changed to using Telerik JustMock because it has a less verbose syntax. I may post a “differences between Moq and JustMock” some day, but others have already covered that on the web, so it really isn’t necessary.

I also find great value in dependency injection, so IoC containers are invaluable in my production code; however, something I have recently learned is that they can be just as invaluable in a unit test. The dependency injection pattern enables inversion of control (IoC): concrete dependencies are specified at runtime for abstract dependencies. That is to say, if class A relies on interface I, and class B implements I, then class C can associate A with B at runtime without either class knowing of the other. Example:

public class Foo
{
    private IWorker _iworker;
    public Foo(IWorker iworker)
    {
        _iworker = iworker;
    }

    public string DoWork()
    {
        return _iworker.DoWork();
    }
}

public interface IWorker
{
    string DoWork();
}

public class Bar : IWorker
{
    public string DoWork()
    {
        return "Bar";
    }
}

We have two classes: Foo, which has a dependency on an abstraction (the IWorker interface), and Bar, which implements that abstraction. You can see that Foo doesn’t have any reference to class Bar in this example, and the reverse is true: Bar doesn’t know anything about Foo.

One might use this class in the following manner:

// Manually injecting the dependency through a constructor
var foo = new Foo(new Bar());

// or by Automatically injecting the dependency by setting up an IoC container
var container = new UnityContainer();
container.RegisterType<IWorker, Bar>();

// Typically the below step is done by a dependency resolver which depends on a
// container of some sort, but for a concise example:
Foo foo = container.Resolve<Foo>();

I’ll let the comments explain the code. Suffice it to say that the second pattern is much nicer, because a third class is responsible for wiring up all of your dependencies. In MVC, this is typically a static BootStrapper class, but it can be anything that sets the MVC configuration’s global DependencyResolver object.
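
To make the “third class wires everything up” idea concrete, here is a toy composition root. This is a hypothetical sketch, not the MVC BootStrapper or Unity itself; it just shows the mechanism a real container provides. The IWorker/Bar types are copied from the example above so the snippet is self-contained.

```csharp
using System;
using System.Collections.Generic;

// Minimal copies of the post's abstraction and implementation,
// included so this example compiles on its own.
public interface IWorker { string DoWork(); }
public class Bar : IWorker { public string DoWork() { return "Bar"; } }

// A toy composition root: it alone knows that IWorker maps to Bar.
public static class Bootstrapper
{
    private static readonly Dictionary<Type, Func<object>> registrations =
        new Dictionary<Type, Func<object>>();

    // Record that requests for TAbstraction should produce a new TConcrete.
    public static void Register<TAbstraction, TConcrete>()
        where TConcrete : TAbstraction, new()
    {
        registrations[typeof(TAbstraction)] = () => new TConcrete();
    }

    // Look up the registered factory and build the concrete instance.
    public static T Resolve<T>()
    {
        return (T)registrations[typeof(T)]();
    }
}
```

Usage mirrors the Unity snippet above: `Bootstrapper.Register<IWorker, Bar>();` once at startup, then `new Foo(Bootstrapper.Resolve<IWorker>())` anywhere, with neither Foo nor Bar referencing the other.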

Why should you use this pattern in your class design? Unit tests! Unit tests! Unit tests!

Example:

[TestMethod]
public void a_dowork_returns_dowork_from_IWorker()
{
    // Arrange
    var container = new MockingContainer<Foo>();
    container.Arrange<IWorker>(i => i.DoWork()).Returns("test");

    // Act 
    Foo foo = container.Instance;
    var result = foo.DoWork();

    // Assert
    Assert.AreEqual("test", result);
}

Mind blown? There’s a lot going on here, but essentially:

Arrange:
The unit test takes advantage of the dependency injection pattern and injects a proxy implementation of the IWorker interface into the Foo object it maintains inside the MockingContainer. The call to Arrange is saying that this proxy object should return “test” for any calls to its DoWork function. So we have an object that implements IWorker (it has nothing to do with class Bar), and we inject that into the instance of Foo automatically with the container.

Act:
We then call DoWork on the instance of Foo that the container provides us (which already has the dependency injected).

Assert:
We assert that the result returned from foo.DoWork() matches what the proxy DoWork was returning.
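
For contrast, here is the same test written without any mocking framework at all, hand-injecting a tiny fake IWorker through the constructor (the manual-injection style from the first code sample). This is a sketch; the Foo and IWorker types are copied from earlier so it stands alone.

```csharp
// The post's types, copied here so the example is self-contained.
public interface IWorker { string DoWork(); }

public class Foo
{
    private readonly IWorker _iworker;
    public Foo(IWorker iworker) { _iworker = iworker; }
    public string DoWork() { return _iworker.DoWork(); }
}

// A hand-written fake: essentially what MockingContainer generates
// for us behind the scenes when we call Arrange.
public class FakeWorker : IWorker
{
    public string DoWork() { return "test"; }
}
```

The test body then becomes `var foo = new Foo(new FakeWorker());` followed by `Assert.AreEqual("test", foo.DoWork());`. The mocking container saves you from writing a fake class per interface per behavior.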

This is a simple contrived example, but it illustrates how IoC containers work in tandem with mocking very well. I hope someone finds this useful!

Separation of responsibilities: Ability, AbilityEffect, and EffectManager

After talking with @Sunflash93 on twitter (find their blog here), I got to thinking I should post about how I designed the ability system in my game Z-Com. See my previous blog post for more info about the game.

In my coding adventures, I like to stick to the SOLID principles of OOD as outlined here. My favorite, and arguably the easiest of the principles to smell, is the Single Responsibility Principle. In short: a class should have one, and only one, reason to change. This can and should be applied to all facets of OO programming, including game development.

When first trying to flesh out the details of how I would do abilities, I knew I wanted a few things:

  • Single target abilities
  • AOE/Multi target abilities
  • Friendly abilities (heals)
  • Damage/Healing over time abilities
  • Constant effects (increases your speed by x for y seconds)

Starting with single target abilities, I thought perhaps I would just do it all in one class (Ability). A TacticalEntity (my movable player and zombie object) would get a list of abilities that one could fire at will (zombies through their AI, and players through some GUI). What is so wrong with this approach? For starters, it would work fine for a single-target or AOE instant ability, but how would a single method on a single instance of an ability apply a damage-over-time effect (e.g., 10 damage per second for 4 seconds)? You have to apply something to an entity and have it stick: AbilityEffect.

That was the biggest revelation for me: separate the responsibility of applying effects and the actual damage/healing effect to a different object altogether. Ok great… now I can just put a collection of effects on a TacticalEntity and an ability can apply effects to the entity! Wait… who is responsible for removing the effects once they expire? For that matter, who is responsible for keeping track of all the effects?

Of course the effect could probably have handled all of this, and the entity itself could have removed effects from itself when they expire, but that’s not the responsibility of the TacticalEntity. It already has a lot of code and does enough. That is where EffectManager comes along. It’s a static class that has an Activity() method that gets called every frame, and it gets the honor of keeping track of a Dictionary<TacticalEntity, List<IAbilityEffect>> which holds all effects that are applied to all TacticalEntities.

In both of the examples above, adhering to the Single Responsibility Principle drove me to make decisions which keep my code more concise and maintainable. Any time you hear yourself saying you can use a single class to do multiple different things, you should ask yourself if it would work better split into separate responsibilities.

I haven’t done an AOE ability yet, but that is all about targeting and figuring out which entities to apply effects to, outside of the whole ability/effect/manager classes as described above. Without rambling any further, here is the code as it stands right now:

BasicAbility.cs:

    public class BasicAbility : IAbility
    {
        public BasicAbility(List<IAbilityEffect> effects)
        {
            Effects = effects;
        }

        public void execute(TacticalEntity source, List<TacticalEntity> destination)
        {
            foreach (TacticalEntity entity in destination)
            {
                execute(source, entity);
            }
        }

        public void execute(TacticalEntity source, TacticalEntity destination)
        {
            Projectile projectile = Factories.ProjectileFactory.CreateNew();
            projectile.Position = source.Position;
            projectile.SourceEntity = source;
            projectile.TargetEntity = destination;
            projectile.SpriteAnimate = true;
            projectile.Ability = this;
        }

        public List<IAbilityEffect> Effects
        {
            get;
            private set;
        }
    }

AbilityEffect.cs:

    public class AbilityEffect : IAbilityEffect
    {
        private bool ConstantEffectsApplied = false;

        public AbilityEffect()
        {
            ConstantEffectsApplied = false;
        }

        public AbilityEffect(bool tickImmediately, int healthPerTick, int speedEffect, int defenseEffect, float aggroRadiusEffect, int strengthEffect, int totalticks, string name)
        {
            TickImmediately = tickImmediately;
            HealthEffectPerTick = healthPerTick;
            SpeedEffectWhileActive = speedEffect;
            DefenseEffectWhileActive = defenseEffect;
            AggroRadiusEffectWhileActive = aggroRadiusEffect;
            StrengthEffectWhileActive = strengthEffect;
            TotalTicks = totalticks;
            Name = name;
            ConstantEffectsApplied = false;
        }

        public AbilityEffect(TacticalEntity source, TacticalEntity affectedEntity, IAbilityEffect that)
            : this(that.TickImmediately, that.HealthEffectPerTick, that.SpeedEffectWhileActive, that.DefenseEffectWhileActive, that.AggroRadiusEffectWhileActive, that.StrengthEffectWhileActive, that.TotalTicks, that.Name)
        {
            AffectedEntity = affectedEntity;
            SourceEntity = source;
        }

        public int HealthEffectPerTick
        {
            set;
            get;
        }

        public int SpeedEffectWhileActive
        {
            set;
            get;
        }

        public int DefenseEffectWhileActive
        {
            set;
            get;
        }

        public float AggroRadiusEffectWhileActive
        {
            set;
            get;
        }

        public int StrengthEffectWhileActive
        {
            set;
            get;
        }

        private int _TotalTicks;
        public int TotalTicks
        {
            get
            {
                return _TotalTicks;
            }
            private set
            {
                _TotalTicks = value;
                TicksRemaining = value;
            }
        }

        public int TicksRemaining
        {
            get;
            private set;
        }

        public string Name
        {
            get;
            set;
        }

        public bool Active
        {
            get
            {
                return TicksRemaining > 0;
            }
        }

        public TacticalEntity AffectedEntity
        {
            get;
            set;
        }

        public void ApplyConstantEffects()
        {
            AffectedEntity.strengthEffects += StrengthEffectWhileActive;
            AffectedEntity.defenseEffects += DefenseEffectWhileActive;
            AffectedEntity.speedEffects += SpeedEffectWhileActive;
            AffectedEntity.aggroCircleRadiusEffects += AggroRadiusEffectWhileActive;
        }

        public void RemoveConstantEffects()
        {
            AffectedEntity.strengthEffects -= StrengthEffectWhileActive;
            AffectedEntity.defenseEffects -= DefenseEffectWhileActive;
            AffectedEntity.speedEffects -= SpeedEffectWhileActive;
            AffectedEntity.aggroCircleRadiusEffects -= AggroRadiusEffectWhileActive;
        }

        public void ApplyEffectTick()
        {
            if (!ConstantEffectsApplied)
            {
                ApplyConstantEffects();
                ConstantEffectsApplied = true;
            }
            if (Active)
            {
                AffectedEntity.health += HealthEffectPerTick;
                --TicksRemaining;
            }
        }

        public IAbilityEffect Clone(TacticalEntity source, TacticalEntity entity)
        {
            // Use the caller-supplied source; the prototype's own SourceEntity is never set.
            return new AbilityEffect(source, entity, this);
        }

        public TacticalEntity SourceEntity
        {
            get;
            private set;
        }

        public bool TickImmediately { get; set; }
    }

EffectManager.cs:

    public static class EffectManager
    {
        private static Dictionary<TacticalEntity, List<IAbilityEffect>> entityEffects = new Dictionary<TacticalEntity, List<IAbilityEffect>>(20);
        private static double lasttick = TimeManager.CurrentTime;

        public static void AddEffectsToEntity(TacticalEntity source, TacticalEntity entity, List<IAbilityEffect> effects)
        {
            if (!entityEffects.ContainsKey(entity))
            {
                entityEffects.Add(entity, new List<IAbilityEffect>(effects.Count));
            }
            foreach (IAbilityEffect effect in effects)
            {
                AddEffectToEntity(source, entity, effect);
            }
        }

        public static void AddEffectToEntity(TacticalEntity source, TacticalEntity entity, IAbilityEffect effect)
        {
            IAbilityEffect newEffect = effect.Clone(source, entity);
            List<IAbilityEffect> effects;
            if (!entityEffects.ContainsKey(entity))
            {
                effects = new List<IAbilityEffect>(1);
                entityEffects.Add(entity, effects);
            }
            else
            {
                effects = entityEffects[entity];
            }
            effects.Add(newEffect);

            if (newEffect.Active && newEffect.TickImmediately)
            {
                newEffect.ApplyEffectTick();
            }
        }

        public static void Activity()
        {
            if ((TimeManager.CurrentTime - lasttick) > 1.0)
            {
                lasttick = TimeManager.CurrentTime;
                foreach (KeyValuePair<TacticalEntity, List<IAbilityEffect>> pair in entityEffects)
                {
                    TickAndRemoveInactiveEffects(pair);
                }
            }
        }

        private static void TickAndRemoveInactiveEffects(KeyValuePair<TacticalEntity, List<IAbilityEffect>> pair)
        {
            foreach (IAbilityEffect effect in pair.Value)
            {
                effect.ApplyEffectTick();
            }

            for (int x = pair.Value.Count - 1; x >= 0; --x)
            {
                if (!pair.Value[x].Active)
                {
                    pair.Value[x].RemoveConstantEffects();
                    pair.Value.RemoveAt(x);
                }
            }
        }
    }

And the call to attack another entity:

        public virtual void attack(TacticalEntity attackableEntity)
        {
            if (this.currentAbility != null &&
                this.attackCircle.CollideAgainst(attackableEntity.hitCircle))
            {
                this.currentAbility.execute(this, attackableEntity);                
            }
        }
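
As a sketch of the AOE targeting mentioned earlier (purely hypothetical, this is not code from the game): picking every entity within some radius of the ability’s center point can be done with squared distances, outside of the ability/effect/manager classes. A stand-in Position struct is used here in place of the real TacticalEntity.

```csharp
using System.Collections.Generic;

// Hypothetical stand-in for a TacticalEntity's position.
public struct Position
{
    public float X, Y;
    public Position(float x, float y) { X = x; Y = y; }
}

public static class AoeTargeting
{
    // Returns every position within `radius` of `center`. Comparing
    // squared distances avoids a square root per entity.
    public static List<Position> TargetsInRadius(
        Position center, float radius, IEnumerable<Position> entities)
    {
        var targets = new List<Position>();
        foreach (var e in entities)
        {
            float dx = e.X - center.X;
            float dy = e.Y - center.Y;
            if (dx * dx + dy * dy <= radius * radius)
                targets.Add(e);
        }
        return targets;
    }
}
```

The resulting list could then be handed straight to the `execute(source, destination)` overload that takes a list of targets.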

Putting the IGameActionManager interface to work

Things have been pretty hectic for me lately. I have 3-week-old twins and barely enough time to do much of anything for myself. My coding comes in bits and pieces here and there, but I have managed to make some strides on a game I have been working on. I am not ready to announce anything regarding it yet as it’s still just a prototype, but if it starts coming together to a point where I think it will be seen to fruition, I will definitely share.

To the topic at hand!

I posted some time ago, prior to the initial Windows Phone 7 release, about an idea I had for managing input between devices and simplifying input management. Here is the post. I knew the theory was sound, but I recently got to put it to the test. I have been developing a game prototype as a Windows project, knowing that XNA is cross-platform. The plan has always been to release the game primarily for the Windows Phone, but I want to be able to run and debug on the PC for simplicity, so I have been doing so with the hope that someday I would port the thing to WP7. Recently I did just that, and it really gave my IGameActionManager abstraction a workout. I believe I have finally ironed out the best pattern to mimic the previous/current state of the KeyboardState and MouseState objects while keeping the game code focused, simple, and unencumbered by platform-specific concerns.

I submit to you the results:

The interface becomes very simple:

public interface IGameActionManager
{
	GameActionState GetState();
}

Notice how it returns a new class, GameActionState. That class is a read-only data object that defines the state of input at any given frame. Here it is:

public class GameActionState
{
	public GameActionState(bool isJumping,
		Vector2 motion,
		Vector2? cameraVelocity,
		bool isIdle,
		bool isFiring,
		Vector2? screenSelectionType1,
		Vector2? screenSelectionType2,
		bool isQuitting,
		float zoomChange,
		Point? indicatorLocation)
	{
		_isJumping = isJumping;
		_motion = motion;
		_cameraVelocity = cameraVelocity;
		_isIdle = isIdle;
		_isFiring = isFiring;
		_screenSelectionType1 = screenSelectionType1;
		_screenSelectionType2 = screenSelectionType2;
		_isQuitting = isQuitting;
		_zoomChange = zoomChange;
		_indicatorLocation = indicatorLocation;
	}

	private Point? _indicatorLocation;
	public Point? IndicatorLocation
	{
		get
		{
			return _indicatorLocation;
		}
	}

	private float _zoomChange;
	public float ZoomChange
	{
		get
		{
			return _zoomChange;
		}
	}

	private bool _isJumping;
	public bool IsJumping
	{
		get
		{
			return _isJumping;
		}
	}

	private Vector2 _motion;
	public Vector2 Motion
	{
		get
		{
			return _motion;
		}
	}

	private bool _isIdle;
	public bool IsIdle
	{
		get
		{
			return _isIdle;
		}
	}

	private bool _isFiring;
	public bool IsFiring
	{
		get
		{
			return _isFiring;
		}
	}

	private Vector2? _screenSelectionType1;
	public Vector2? ScreenSelectionType1
	{
		get
		{
			return _screenSelectionType1;
		}
	}

	private Vector2? _screenSelectionType2;
	public Vector2? ScreenSelectionType2
	{
		get
		{
			return _screenSelectionType2;
		}
	}

	private bool _isQuitting;
	public bool IsQuitting
	{
		get
		{
			return _isQuitting;
		}
	}

	private Vector2? _cameraVelocity;
	public Vector2? CameraVelocity
	{
		get
		{
			return _cameraVelocity;
		}
	}
}
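
As an aside on the design: the same read-only contract can be written far more concisely with auto-implemented properties and private setters, avoiding all the explicit backing fields. This is a sketch, not the actual game code; only two properties are shown for brevity.

```csharp
// Same caller-facing behavior as GameActionState above: values are set
// once in the constructor and are read-only from the outside.
public class GameActionStateConcise
{
    public GameActionStateConcise(bool isJumping, bool isFiring)
    {
        IsJumping = isJumping;
        IsFiring = isFiring;
    }

    public bool IsJumping { get; private set; }
    public bool IsFiring { get; private set; }
    // ...the remaining properties would follow the same pattern.
}
```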

In that earlier post I noted that I used digital rather than analog values for some fields like IsMovingUp and IsJumping, where it might be beneficial to use their analog equivalents. Here I have illustrated a partial change in that mindset.

Notice the Vector2 returned for the Motion property. The thought here is that a single vector can carry both the direction and magnitude of motion, instead of relying on some speed modifier and one-dimensional digital booleans. In the concrete implementations (GameActionManagerWindows and GameActionManagerWindowsPhone), you can use anything to produce this value. For instance, as you will see in my implementations later in this post, I use the directional arrows on the keyboard for the motion vector, but in the phone implementation I poll for the FreeDrag GestureType. The important thing to note here is that the game code doesn’t care what you use. It just grabs the Motion property off the GameActionState and uses it, oblivious to where that value came from. (Side note: I always find it interesting how we talk about code as if it thinks or has a personality. I have seen this from so many developers, myself included, and it’s somewhat fascinating to me.)

The other things I want to point out before moving on to pasting the implementations for windows and wp7 are these two properties:

private Vector2? _screenSelectionType1;
public Vector2? ScreenSelectionType1
{
	get
	{
		return _screenSelectionType1;
	}
}

private Vector2? _screenSelectionType2;
public Vector2? ScreenSelectionType2
{
	get
	{
		return _screenSelectionType2;
	}
}

Notice the Vector2? (aka Nullable<Vector2>) return type. That allows me to communicate two different states for the game action: a selection was made (with coordinates), or no selection was made (null). For an otherwise non-nullable value type (struct), Nullable<T> is a great way to virtually turn it into a type which can have null assignments. It’s useful in situations such as this where reserving some special value of the actual Vector2 wouldn’t help, because every Vector2 is also valid input (i.e., I can’t just use Vector2.Zero as a way to specify that no selection was made, because Vector2.Zero is a completely valid selection). Vector2(-1, -1) also just seemed strange to use since I’m not really sure what the coordinate system should look like, and there is the perfectly viable Nullable<> wrapper to use, so why not!
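
The null-versus-value distinction is easy to demonstrate in isolation. The snippet below uses a minimal stand-in Vector2 struct (so it doesn’t reference XNA) to show that null cleanly means “no selection” while Vector2.Zero remains a perfectly valid selection:

```csharp
// Minimal stand-in for XNA's Vector2, just for this illustration.
public struct Vector2
{
    public float X, Y;
    public Vector2(float x, float y) { X = x; Y = y; }
    public static readonly Vector2 Zero = new Vector2(0, 0);
}

public static class SelectionDemo
{
    // Null means "no selection"; any actual value, including Zero,
    // is a real selection with coordinates.
    public static string Describe(Vector2? selection)
    {
        return selection.HasValue
            ? "selected at " + selection.Value.X + "," + selection.Value.Y
            : "no selection";
    }
}
```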

On to the implementations!

Windows/PC:

public class GameActionManagerWindows : IGameActionManager
{
	public GameActionState GetState()
	{
		KeyboardState keyboardState = Keyboard.GetState();
		MouseState mouseState = Mouse.GetState();
		Vector2? selectionType1 = null;
		Vector2? selectionType2 = null;

		if (mouseState.LeftButton == ButtonState.Pressed)
		{
			selectionType1 = new Vector2(mouseState.X, mouseState.Y);
		}

		if (mouseState.RightButton == ButtonState.Pressed)
		{
			selectionType2 = new Vector2(mouseState.X, mouseState.Y);
		}

		bool isJumping = keyboardState.IsKeyDown(Keys.Space);
		Vector2 motion = Vector2.Zero;

		if (keyboardState.IsKeyDown(Keys.Right))
		{
			motion.X += 1.0f;
		}
		if (keyboardState.IsKeyDown(Keys.Left))
		{
			motion.X -= 1.0f;
		}
		if (keyboardState.IsKeyDown(Keys.Up))
		{
			motion.Y -= 1.0f;
		}
		if (keyboardState.IsKeyDown(Keys.Down))
		{
			motion.Y += 1.0f;
		}

		bool isFiring = mouseState.RightButton == ButtonState.Pressed;

		bool isIdle = !keyboardState.IsKeyDown(Keys.Up) &&
			!keyboardState.IsKeyDown(Keys.Down) && !keyboardState.IsKeyDown(Keys.Left) &&
			!keyboardState.IsKeyDown(Keys.Right) && !keyboardState.IsKeyDown(Keys.Space);

		bool isQuitting = keyboardState.IsKeyDown(Keys.Escape);

		bool isZoomingIn = keyboardState.IsKeyDown(Keys.OemPlus);
		bool isZoomingOut = keyboardState.IsKeyDown(Keys.OemMinus);

		return new GameActionState(isJumping, motion, Vector2.Zero,
			isIdle, isFiring, selectionType1, selectionType2, isQuitting,
			isZoomingIn ? 1.0f : isZoomingOut ? -1.0f : 0.0f,
			new Point(mouseState.X, mouseState.Y));
	}
}

I chose to use the keyboard plus/minus to zoom, but you could just as easily use the mouse scroll wheel here by polling the MouseState.
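
For reference, the scroll-wheel variant might look like this. Note that XNA’s MouseState.ScrollWheelValue is cumulative since the game started, so you zoom by the per-frame difference; the 0.001f scale factor here is a hypothetical tuning value, not from the original code.

```csharp
// Converts XNA's cumulative scroll wheel reading into a per-frame zoom delta.
public class ScrollWheelZoom
{
    private int previousWheel;

    // Pass in mouseState.ScrollWheelValue each frame.
    public float GetZoomChange(int currentWheel)
    {
        // Difference from last frame, scaled down to a usable zoom step.
        float change = (currentWheel - previousWheel) * 0.001f;
        previousWheel = currentWheel;
        return change;
    }
}
```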

The WP7 Implementation:

#if WINDOWS_PHONE
using Microsoft.Xna.Framework.Input.Touch;
#endif

#if WINDOWS_PHONE
    public class GameActionManagerWindowsPhone : IGameActionManager
    {
        Vector2 MaxCameraVelocity = new Vector2(5f, 5f);
        public GameActionState GetState()
        {

            TouchPanel.EnabledGestures =
                GestureType.Hold |
                GestureType.Tap |
                GestureType.DoubleTap |
                GestureType.FreeDrag |
                GestureType.Flick |
                GestureType.Pinch;

            // we use raw touch points for selection, since they are more appropriate
            // for that use than gestures. so we need to get that raw touch data.
            TouchCollection touches = TouchPanel.GetState(); 

            Vector2? selectionType1 = null;
            Vector2? selectionType2 = null;

             // next we handle all of the gestures. since we may have multiple gestures available,
            // we use a loop to read in all of the gestures. this is important to make sure the
            // TouchPanel's queue doesn't get backed up with old data
            float zoomChange = 0.0f;
            Vector2 motion = Vector2.Zero;
            Vector2? cameraVelocity = null;

            while (TouchPanel.IsGestureAvailable)
            {
                // read the next gesture from the queue
                GestureSample gesture = TouchPanel.ReadGesture();

                switch (gesture.GestureType)
                {
                    case GestureType.Pinch:
                        // get the current and previous locations of the two fingers
                        Vector2 a = gesture.Position;
                        Vector2 aOld = gesture.Position - gesture.Delta;
                        Vector2 b = gesture.Position2;
                        Vector2 bOld = gesture.Position2 - gesture.Delta2;

                        // figure out the distance between the current and previous locations
                        float d = Vector2.Distance(a, b);
                        float dOld = Vector2.Distance(aOld, bOld);

                        // calculate the difference between the two and use that to alter the scale
                        zoomChange = (d - dOld) * .015f;

                        // Allow dragging while pinching by taking the average of the two touch points' deltas
                        motion = (gesture.Delta + gesture.Delta2) / 2;
                        break;
                    case GestureType.FreeDrag:
                        motion = gesture.Delta;
                        cameraVelocity = Vector2.Zero;
                        break;
                    case GestureType.Hold:
                        selectionType2 = gesture.Position;
                        break;
                    case GestureType.Tap:
                        selectionType1 = gesture.Position;
                        break;
                    case GestureType.Flick:
                        cameraVelocity = Vector2.Clamp(gesture.Delta, -MaxCameraVelocity, MaxCameraVelocity);
                        break;
                }
            }

            return new GameActionState(false, motion, cameraVelocity, false, false, selectionType1, selectionType2, false, zoomChange, Point.Zero);
        }
    }
#endif

I separated that into two #if blocks because both the using statement at the top and the class itself contain Windows Phone specific code. The #if blocks are only there to ensure that the Windows project can still compile with the phone-specific code in it: since the WINDOWS_PHONE directive is not defined in that project, the compiler skips that code.

And the grand finale… the Game code in all its simplified glory!

To define the proper GameActionManager, in the Initialize() method of the Game class:

#if WINDOWS_PHONE
            gameActionManager = new GameActionManagerWindowsPhone();
// You could do the else if here for XBOX, but I am not targeting XBLIG yet
#else
            gameActionManager = new GameActionManagerWindows();
#endif

And the Update() method:

        protected override void Update(GameTime gameTime)
        {
            gameActionState = gameActionManager.GetState();

            if (gameActionState.IsQuitting)
            {
                this.Exit();
            }

            updateCamera();
            updateUnits();
            if (bullet.alive)
            {
                bullet.currentPosition += bullet.velocity;
            }
            handleSelections();
            handleFiring();

            previousGameActionState = gameActionState;

            base.Update(gameTime);
        }

private void updateCamera()
        {
            if (camera.Velocity.X != 0.0f || camera.Velocity.Y != 0.0f)
            {
                camera.Update();
            }

            if (gameActionState.CameraVelocity != null)
            {
                camera.Velocity = gameActionState.CameraVelocity.Value;
            }

            camera.Pos -= gameActionState.Motion;
            camera.Zoom += gameActionState.ZoomChange;
        }

Notice the updateCamera() method in particular. Camera is a Camera2D class I use for the 2D camera, which we don’t really need to talk about here (maybe I’ll do another post on that). Its Pos and Zoom properties drive transformations on the draw calls, which give the appearance of a camera moving around. The gameActionState.Motion value comes, via polymorphism, from whichever GetState() implementation was selected when the GameActionManager variable was assigned in the Initialize() method.

Good stuff… the best part about this whole idea is that because the platform-specific bits are abstracted out, the Game code itself isn’t concerned with anything to do with the platform. The Game code is now completely portable across multiple devices, and is future-proof!
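
One last payoff of keeping previousGameActionState around in Update(): edge detection, i.e. distinguishing “started firing this frame” from “still holding the fire button”, exactly the way the previous/current KeyboardState pattern works. A minimal sketch with a stand-in state type (the real code would compare two GameActionState instances):

```csharp
// Stand-in for the relevant slice of GameActionState.
public class InputState
{
    public bool IsFiring;
}

public static class InputEdges
{
    // True only on the frame where the fire action transitions
    // from not-pressed to pressed.
    public static bool FiredThisFrame(InputState current, InputState previous)
    {
        return current.IsFiring && !previous.IsFiring;
    }
}
```

Because the comparison happens against GameActionState rather than KeyboardState or TouchCollection, the same edge-detection logic works unchanged on both Windows and WP7.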