Cortana officially announced at Build 2014

There are a lot of cool things coming out of the Build conference this year. One of the Windows Phone features that I’m most excited about is Cortana. It’s the new personal assistant available on Windows Phone 8.1, and it looks to be extremely well done!

Here’s a video explaining it:

I think one of the cooler features is the ability to have it remind you of something the next time you talk to a specific person, and the notebook feature is awesome. It really does seem like they modeled it after a personal assistant.

10 reasons to use TDD (Test Driven Development)

  1. It will help you improve your OOD

    Proper object oriented design is the key to writing extensible, maintainable, and stable software. If you pile too much functionality into one big class or one big method, you’re just asking for trouble. TDD makes it easier to adhere to the SRP (Single Responsibility Principle) by encouraging you to create smaller classes with narrower responsibilities.

  2. Get it in front of your users faster

    Designing your classes to inherently rely on abstract (mockable) dependencies is a sure way to get on the fast track to building a demo. For example, in a database driven application, a mock data layer can be substituted with generated data to be consumed by the front end. Though somewhat unorthodox, mocking frameworks can work just as well to fake a back end system in a running application as they can for unit test mocking! In software development, getting the product in front of your users can be the most important step, because it gives them a chance to change requirements early when it’s less painful.

  3. Good coverage

    TDD will have you writing a test for every bit of functionality you are coding. You automatically force yourself into high code coverage if you stick to the cadence: write a failing test, make it pass, refactor.

  4. Quickly verify all functionality

    When refactoring code, it is very useful to be able to quickly verify all existing functionality in an instant. This isn’t necessarily a benefit of TDD itself, but rather of having good unit test coverage of business rules. Sometimes in an emergency situation (e.g. an outage due to a code bug), we are forced to cut corners and not necessarily write a test first. Having that safety net of unit tests there is a huge confidence booster. Naturally, by developing with TDD, you are building a safety net as you go!

  5. It forces you to rely only on abstractions (Dependency Inversion Principle).

    You wouldn’t solder a lamp directly to the electrical wiring in a wall, would you?

    The dependency inversion principle is one of the 5 SOLID principles of object oriented design. It states that a class should depend on abstractions rather than on concrete types; that is to say, a class should not reference another concrete class directly. When done correctly, test driven development encourages adherence to this principle, because you will always need something to mock when writing a test (there is a small sketch of this right after this list).

  6. Smaller problems are easier to solve.

    (2 + 2) / 2 – 2 * 6 = ?

    If you understand the order of operations (if you’re here, I’m sure you do), then your brain automatically broke this equation down into solvable parts. Likely, you figured out that 2 + 2 is 4, and 2 * 6 is 12, so the equation became 4/2 – 12. Then you might solve 4/2 to get 2, and finish out with 2 – 12 = -10. The point is that you broke the larger problem down into smaller chunks, because that’s the easiest way to get to the answer. (Bonus points if you can just look at equations like that and spit out an answer!) Any programmer worth their salt isn’t going to attack a large application by writing one big blob of code. They’re going to understand what the customer wants, break it down into pieces, and build those pieces to fit together into the larger system. TDD is a great way to do just that without having to understand the entire big picture immediately.

  7. It feels really good

    I’ve done quite a few projects with TDD now. The first time it feels strange, like it can’t possibly work. You toil for a few days or weeks on writing these individual little bits, solving little tiny problems as you go. The larger problem is not necessarily in your brain the entire time, so it feels foreign. Finally, when it comes time to make a demo, you get to connect all the little pieces, which almost always involves an IoC container for me. This is a very satisfying process and brings me a lot of joy.

  8. Opens up the path for future testing

    This is a topic I have talked about at length. Some may not see the value in this immediately, but I find it extremely important. Simply by following the TDD pattern, you are ensuring the future testability of your classes. No one writes bug-free code every time. I can’t tell you how many times I have been seriously happy when it comes time to fix a bug in code that I’ve used TDD with. I come back to find that in order to reproduce the bug, I just have to provide a very specific mock arrangement in a new unit test. The path was laid by me in the past, and now it is super easy to prove the bug is fixed by making that failing unit test pass.

  9. Stops analysis paralysis

    From Wikipedia:

    Analysis paralysis or paralysis of analysis is an anti-pattern, the state of over-analyzing (or over-thinking) a situation so that a decision or action is never taken, in effect paralyzing the outcome.

    Sure, any new application needs some analysis, but when the above happens, nothing gets done. TDD allows you to get started right away by solving small problems immediately. Sometimes, the bigger picture starts to come together when you start chipping away at the little bits.

  10. Slow is smooth, and smooth is fast

    This is an old saying applied to targeting a firearm. The saying explains that if you move too fast, you’re going to make a mistake and fail. I believe the same saying can be applied to software development as well. The argument against TDD and unit tests in general that I’ve heard in the past is that they slow you down. It’s a natural thought to have: I can either start writing code to solve the problem, or start writing code to test non-existent code, and then write the same code to solve the problem anyway.

    WRONG!

    This argument infuriates me, because it typically comes from someone in a position of power who is trying to justify cutting corners. Sure, if you put two people on the same complex problem, one using TDD and one hacking directly toward a solution, the latter is going to finish quicker, but that’s just not a realistic long-term scenario. With any given application, someone is eventually going to need changes. While TDD might take longer in the initial phases, it pays dividends extremely quickly once changes start pouring in. The class design is decoupled and class responsibilities are tightly limited, so requirement changes very rarely mean changing working code! Instead, it is safer and quicker to extend and write new code. Fewer bugs are created, and new features can be added much more quickly in the long run.
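
To make reason #5 (and the failing-test cadence from reason #3) concrete, here is a minimal sketch. The names Greeter and IGreetingRepository are purely illustrative, and it uses the same Telerik JustMock MockingContainer style that shows up in the test examples further down this page:

// The class under test depends only on an abstraction...
public interface IGreetingRepository
{
    string GetGreeting(string language);
}

public class Greeter
{
    private readonly IGreetingRepository _repository;

    // No concrete repository type is ever referenced here.
    public Greeter(IGreetingRepository repository)
    {
        _repository = repository;
    }

    public string Greet(string language)
    {
        // Fall back to English when nothing is stored for the language.
        return _repository.GetGreeting(language) ?? "Hello";
    }
}

// ...so the test that drives it (written first, failing until Greet() is implemented)
// only needs a mock, never a database or a file system.
[TestMethod]
public void greet_falls_back_to_english_for_an_unknown_language()
{
    var container = new MockingContainer<Greeter>();
    container.Arrange<IGreetingRepository>(r => r.GetGreeting("klingon"))
             .Returns((string)null);

    Assert.AreEqual("Hello", container.Instance.Greet("klingon"));
}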

Spriter implementation for FlatRedBall

For over a year now, I have been working on an API and plugin for the FlatRedBall engine that makes it dead simple to load, play, and manipulate Spriter animations in your FlatRedBall games. The implementation is written as an extension to the FlatRedBall engine, so you get all the goodness that comes from using first-class objects that the engine understands.

A few features that other Spriter implementations may not have:

  • Positioning in 3D space
  • Scaling the entire animation
  • Setting animation speed
  • Reversing animation playback (negative speed)
  • Rotating an animation on any axis (X/Y/Z)
  • Cloning animations
  • Playing every entity in an animation file simultaneously via SpriterObjectCollection
    • Every one of the features above works on SpriterObjectCollection as well

This is just a subset of the features in my Spriter implementation. If you are interested, install the FlatRedBall development kit, and head over to the releases section to get the latest release of my plugin. It’s a simple installation process into the Glue tool you get in FRBDK. Follow the tutorials, and you’ll be creating and using Spriter animations in a real game in no time!

How to implement a generic Repository pattern w/ UnitOfWork using the Entity Framework (code-first or model/database-first)

I’ve already written a post describing why a generic repository pattern still works using EF. So, if you want to see my opinions on the usage of what I’m about to show, be my guest and click the link. I’ll wait here, I promise.

Here I’m going to show you HOW to do it, complete with code examples. I am going to assume you know how to generate or already have a DbContext object, like the one you get when adding a new ADO.NET Entity Data Model to your project. This approach works with code-first as well as model/database-first implementations. Either method will get you a DbContext object that you can wrap with the generic repository & unit of work.

Let’s say you are writing a game editor and you have a DbContext object called EnemyEntities:

public partial class EnemyEntities : DbContext
{
    public EnemyEntities()
        : base("name=EnemyEntities")
    {
    }

    protected override void OnModelCreating(DbModelBuilder modelBuilder)
    {
        throw new UnintentionalCodeFirstException();
    }

    public virtual DbSet<Enemy> Enemies { get; set; }
    public virtual DbSet<Ability> Abilities { get; set; }
}

I will define Enemy and Ability later.

The first thing you need is the definition of a generic repository:

public interface IRepository<T>
{
    void InsertOrUpdate(T entity);
    void Remove(T entity);
    IQueryable<T> Find(Expression<Func<T, bool>> predicate);
    IQueryable<T> FindAll();
    T First(Expression<Func<T, bool>> predicate);
    T FirstOrDefault(Expression<Func<T, bool>> predicate);
}

I also use a generic IUnitOfWork<T> that is IDisposable, to force all unit-of-work objects to be disposable, since they hold a DbContext object that is itself IDisposable:

public interface IUnitOfWork<T> : IDisposable where T : class
{
    int Save();
    DbSet<T> DbSet { get; }
    DbEntityEntry Entry(T entity);
}

On to the implementations:

public class DbContextUnitOfWork<T> : IUnitOfWork<T> where T : class
{
    private readonly DbContext _context;

    public DbContextUnitOfWork(DbContext context)
    {
        _context = context;
    }

    public void Dispose()
    {
        _context.Dispose();
    }

    public int Save()
    {
        return _context.SaveChanges();
    }

    public DbSet<T> DbSet
    {
        get
        {
            return _context.Set<T>();
        }
    }

    public DbEntityEntry Entry(T entity)
    {
        return _context.Entry(entity);
    }
}
public class EFRepository<T> : IRepository<T>
    where T : class, IEntity
{
    private readonly IUnitOfWork<T> _unitOfWork;
    private readonly DbSet<T> _dbSet;

    public EFRepository(IUnitOfWork<T> unitOfWork)
    {
        _unitOfWork = unitOfWork;
        _dbSet = _unitOfWork.DbSet;
    }

    public void InsertOrUpdate(T entity)
    {
        if (entity.Id != default(int))
        {
            // Existing entity: attach it if EF isn't tracking it yet, then mark it modified
            if (_unitOfWork.Entry(entity).State == EntityState.Detached)
            {
                _dbSet.Attach(entity);
            }
            _unitOfWork.Entry(entity).State = EntityState.Modified;
        }
        else
        {
            // New entity: insert it
            _dbSet.Add(entity);
        }
    }

    public void Remove(T entity)
    {
        _dbSet.Remove(entity);
    }

    public IQueryable<T> Find(Expression<Func<T, bool>> predicate)
    {
        return FindAll().Where(predicate);
    }

    public IQueryable<T> FindAll()
    {
        return _dbSet;
    }

    public T First(Expression<Func<T, bool>> predicate)
    {
        return FindAll().First(predicate);
    }

    public T FirstOrDefault(Expression<Func<T, bool>> predicate)
    {
        return FindAll().FirstOrDefault(predicate);
    }
}

Note my use of IEntity as a restriction on T in the repository. Here is IEntity:

public interface IEntity
{
    int Id { get; }
}

Yep, that’s it. This is simply so I can ensure that whatever I’m using in an EFRepository<T> has an Id property, because I need a way to identify (primary key) the object generically when doing InsertOrUpdate(). Look at the Id check at the top of InsertOrUpdate() above to see where it’s used.

Using a common entity interface like this means that you have to add files to your project that correspond to the entities you want repositories for. I will usually add a Models directory, add class files for all the entities I want to use in this pattern, and then go at it changing namespaces and implementing the Id property. Here is an example for Enemy and Ability:

// Generated code from EF
public partial class Enemy
{
    public int EnemyId { get; set; }
    public string Name { get; set; }
    public virtual ICollection<Ability> Abilities { get; set; }
}

// Code that you maintain in a separate file 
// which you have to force to be in the same namespace as the generated entity
public partial class Enemy : IEntity
{
    public int Id
    { 
        get
        {
            return EnemyId;
        }
    }
}

// And Ability's generated code:
public partial class Ability
{
    public int AbilityId { get; set; }
    public string Name { get; set; }
}

// And your code:
public partial class Ability : IEntity
{
    public int Id
    {
        get
        {
            return AbilityId;
        }
    }
}

Now that you have your generic repository, an EF implementation of said repository, the generic unit of work definition and one that uses DbContext, and a couple of entities to work with, how do you use it? This is the part where I typically would write unit tests to tickle out the API, but I will get straight to a potential implementation.

Imagine you’re working on a class that will perform higher level modifications to Enemies in the database. Let’s call it EnemyLogic:

public class EnemyLogic : IDisposable
{
	private readonly IRepository<Enemy> _enemyRepository;
	private readonly IUnitOfWork<Enemy> _enemyUnitOfWork;

	public EnemyLogic(IRepository<Enemy> enemyRepository, 
                      IUnitOfWork<Enemy> enemyUnitOfWork)
	{
		_enemyRepository = enemyRepository;
		_enemyUnitOfWork = enemyUnitOfWork;
	}

	public void ClearAbilities()
	{
		var enemies = _enemyRepository.Find(e => e.Name == "Baddy")
					.Include(e => e.Abilities)
					.ToList();

		foreach (var enemy in enemies)
		{
			enemy.Abilities = new List<Ability>();
		}

		_enemyUnitOfWork.Save();
	}

	public void Dispose()
	{
		_enemyUnitOfWork.Dispose();
	}
}
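
As a side note, even before any concrete types are wired up, this design is already unit-testable. Here is a quick sketch of what such a test could look like. It is not from the original post; it uses the Telerik JustMock MockingContainer style that appears again in the next post, and it relies on EF6’s Include() extension falling through to a no-op on a plain in-memory IQueryable:

[TestMethod]
public void clear_abilities_empties_the_ability_list_of_matching_enemies()
{
    var container = new MockingContainer<EnemyLogic>();

    var baddy = new Enemy
    {
        Name = "Baddy",
        Abilities = new List<Ability> { new Ability { Name = "Fireball" } }
    };

    // Whatever predicate ClearAbilities() passes to Find(), hand back our one in-memory enemy.
    // Include() on an in-memory EnumerableQuery has no effect, so the query just runs in memory.
    container.Arrange<IRepository<Enemy>>(
            r => r.Find(Arg.IsAny<Expression<Func<Enemy, bool>>>()))
        .Returns(new List<Enemy> { baddy }.AsQueryable());

    container.Instance.ClearAbilities();

    Assert.AreEqual(0, baddy.Abilities.Count);
}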

And to use the EnemyLogic class, you will need to provide concrete, instantiated IRepository<Enemy> and IUnitOfWork<Enemy> objects, but that’s easy. I use IoC containers in everything I do, so here is an example of how to register and resolve the objects with Unity:

// This is typically done in a bootstrapper function that returns IUnityContainer:
var container = new UnityContainer();
container.RegisterType<IUnitOfWork<Enemy>, DbContextUnitOfWork<Enemy>>(
    new InjectionConstructor(new EnemyEntities()));

container.RegisterType<IRepository<Enemy>, EFRepository<Enemy>>();

// Later on you have access to the container:
using (var enemyLogic = container.Resolve<EnemyLogic>())
{
    enemyLogic.ClearAbilities();
}

If you don’t want to use an IoC container, just instantiate it yourself:

var unitOfWork = new DbContextUnitOfWork<Enemy>(new EnemyEntities());
using (var enemyLogic = new EnemyLogic(new EFRepository<Enemy>(unitOfWork), unitOfWork))
{
	enemyLogic.ClearAbilities();
}

I do love me some Inversion of Control, though: since your concrete registrations are handled completely separately from your business logic, the business logic stays testable, provided you don’t have a direct call to the IoC bootstrapper in it. That’s outside of the scope of this post, though!

If I helped you or you want to send me flames, please comment below!

Why using a generic repository pattern in Entity Framework still works!

I have had lengthy conversations on the topic of using a generic repository pattern on top of Entity Framework (I’m kainazzzo). I believe that I am probably in a minority of developers that think it’s a perfectly acceptable practice. Here are some of the arguments people make about using IRepository<T>:

  1. It couples your class directly to Entity Framework, making it difficult to switch
  2. There is no value-add using a generic repository over top of EF, because it is already a Repository pattern
    • i.e. DbSet<Entity> is the Repository, and DbContext is the UnitOfWork
  3. It causes “leaky” abstraction, encouraging .Include() calls to be made in your business layer code
    • Or that in general, IQueryable should not be returned from a repository, because deferred queries are leaky

I will address these points directly.

It couples your class directly to Entity Framework, making it difficult to switch

The last time I changed data access in running production code was never. There would have to be an astronomically large gulf in functionality between two ORM frameworks for this to even be a consideration. Once code is in production and being used by real live people, any change is risk. Risk is not taken unless the change is deemed to be of some value. Simply changing the ORM or data access framework would be a tough sell to any business unit. Once you choose Entity Framework, you will not change to NHibernate. It is just not going to happen, so this is a difficult argument to make in my opinion. If you really have a problem with it, you can still do a higher-level implementation on top of EFRepository that abstracts away the rest of the EF bits.

There is no value-add using a generic repository over top of EF, because it is already a Repository pattern

BOLOGNA SANDWICH. Seriously… now I’m hungry for processed meat.

What I mean is that this is preposterous. Take my IRepository<T> class for example:

public interface IRepository<T>
{
    void InsertOrUpdate(T entity);
    void Remove(T entity);
    IQueryable<T> Find(Expression<Func<T, bool>> predicate);
    IQueryable<T> FindAll();
    T First(Expression<Func<T, bool>> predicate);
    T FirstOrDefault(Expression<Func<T, bool>> predicate);
}

And the EF implementation:

public class EFRepository<T> : IRepository<T>
    where T : class, IEntity
{
    private readonly IUnitOfWork<T> _unitOfWork;
    private readonly DbSet<T> _dbSet;

    public EFRepository(IUnitOfWork<T> unitOfWork)
    {
        _unitOfWork = unitOfWork;
        _dbSet = _unitOfWork.DbSet;
    }

    public void InsertOrUpdate(T entity)
    {
        if (entity.Id != default(int))
        {
            // Existing entity: attach it if EF isn't tracking it yet, then mark it modified
            if (_unitOfWork.Entry(entity).State == EntityState.Detached)
            {
                _dbSet.Attach(entity);
            }
            _unitOfWork.Entry(entity).State = EntityState.Modified;
        }
        else
        {
            // New entity: insert it
            _dbSet.Add(entity);
        }
    }

    public void Remove(T entity)
    {
        _dbSet.Remove(entity);
    }

    public IQueryable<T> Find(Expression<Func<T, bool>> predicate)
    {
        return FindAll().Where(predicate);
    }

    public IQueryable<T> FindAll()
    {
        return _dbSet;
    }

    public T First(Expression<Func<T, bool>> predicate)
    {
        return FindAll().First(predicate);
    }

    public T FirstOrDefault(Expression<Func<T, bool>> predicate)
    {
        return FindAll().FirstOrDefault(predicate);
    }
}

How many times have you looked up how to do “Insert or Update” in EF? I don’t have to do that any more. I know my repository does it well, and all I have to do is make my entities implement IEntity, which simply ensures that an Id field exists. This is not a problem with DB first implementations, since EF generates classes as partials. My value-add comes from EFRepository implementing basic Id checking in order to initiate a proper update in EF.

Then, there is unit testing. Some people would argue about the value-add of unit tests, but I see unit tests as a priceless artifact one puts on display and protects. They are a window into your code and how it is supposed to function, and they can be a huge safety net. See my post on TDD for more information. Let’s say you were developing a game data editor and you had a class that relied on loading Enemies (EnemyEditor). Perhaps one of the simple requirements was that when loading enemies (GetEnemies()), the list of Abilities must not be null, even if the database provided it as such:

[TestMethod]
public void enemies_have_non_null_ability_list()
{
    var container = new MockingContainer<EnemyEditor>();
    container.Arrange<IRepository<Enemy>>(r => r.FindAll()).Returns(
        new List<Enemy>
        {
            new Enemy
            {
                Name = "Enemy1",
                Abilities = null
            }
        }.AsQueryable());

    var enemies = container.Instance.GetEnemies();

    Assert.IsNotNull(enemies[0].Abilities);
}

This unit test is for a very small requirement, and the arrangement of what IRepository<Enemy> returned was extremely easy to write (if you’re familiar with mocking). You didn’t have to jump through any hoops to mock up the DbSet within the DbContext by creating IDbContext or anything, which admittedly is not impossible, but it isn’t entirely intuitive, all apt alliteration aside.

So there is value in layering on top of Entity Framework with a generic repository pattern. The value is in InsertOrUpdate, as well as simplified unit test mocking.

It causes “leaky” abstraction, encouraging .Include() calls to be made in your business layer code

Given the value that it adds, I am totally alright with this. Yes, IQueryable.Include “leaks” implementation details through the abstraction layer, because .Include() only exists in Entity Framework; however, given that I already stated that the ORM will never change, I am alright with making a conscious decision NOT to abstract away the ORM. I am not abstracting away Entity Framework… it is already an abstraction of data access that I am comfortable with. I am actually EXTENDING Entity Framework by adding a generic repository pattern on top of it. It is much simpler to deal with IRepository<T> and IUnitOfWork<T> than to have some GameEntities override that mocks the DbSet objects inside. Also, I will say it again: you can still use the generic repository pattern as a starting point, and further abstract away the EF-specific bits by using a more domain-centric data access object.
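
To illustrate that last point, here is a rough sketch of what such a domain-centric layer could look like. IEnemyStore is a made-up name, not something from the original post; the idea is simply that the .Include() call stays inside the EF-aware class, and callers never see IQueryable at all:

// A hypothetical domain-centric wrapper around the generic repository.
public interface IEnemyStore
{
    IReadOnlyList<Enemy> GetEnemiesWithAbilities(string name);
}

public class EnemyStore : IEnemyStore
{
    private readonly IRepository<Enemy> _repository;

    public EnemyStore(IRepository<Enemy> repository)
    {
        _repository = repository;
    }

    public IReadOnlyList<Enemy> GetEnemiesWithAbilities(string name)
    {
        // The leaky bits (Include, deferred execution) are contained here.
        return _repository.Find(e => e.Name == name)
            .Include(e => e.Abilities)
            .ToList();
    }
}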

It is my experienced opinion that software development is always about trade-offs. There is never a one-size-fits-all solution, but there are patterns that get us as far as 99% of the way there. It’s up to us as software craftspeople to massage and mold code into a product that works and is not a burden to maintain.

I leave you with two quotes from Bruce Lee:

“Don’t think, Feel, it is like a finger pointing out to the moon, don’t concentrate on the finger or you will miss all that heavenly glory.”

To me, coming up with reasons not to do the most obvious thing, simply because it is not exactly the way you think it should be, is like staring at the finger.

“All fixed set patterns are incapable of adaptability or pliability. The truth is outside of all fixed patterns.”

Without adapting the known repository pattern to fit the Entity Framework appropriately, we limit ourselves to something that is not ideal for the given circumstance.

 

Improving JavaScript arrays by turning them into underscore.js “collections”

Whenever I write JavaScript code, I always end up using underscore.js in some way. It feels so natural to me now that I will often include the library in my project before ever actually needing it, since I know I will end up turning to it. If you’re unfamiliar with how underscore.js works, I highly recommend taking a look at the link above. They describe the library as:

Underscore is a utility-belt library for JavaScript that provides a lot of the functional programming support that you would expect in Prototype.js (or Ruby), but without extending any of the built-in JavaScript objects. It’s the tie to go along with jQuery’s tux, and Backbone.js’s suspenders.

Underscore provides 80-odd functions that support both the usual functional suspects: map, select, invoke — as well as more specialized helpers: function binding, javascript templating, deep equality testing, and so on. It delegates to built-in functions, if present, so modern browsers will use the native implementations of forEach, map, reduce, filter, every, some and indexOf.

An example of a very simple filter operation:

var simplearray = [5, 4, 3, 2, 1, 6, 7, 8, 9, 10];
var greaterThanFour = _.filter(simplearray, function(item) {
    return item > 4;
}); // greaterThanFour contains a brand new array containing [5, 6, 7, 8, 9, 10]

Sometimes the syntax just feels wrong to some people, though, as indicated by this Stack Overflow question.

To quote:

What I was hoping for:

Make underscore’s methods more object oriented:
_.invoke(myCollection, 'method'); ==> myCollection.invoke('method');

I’ll admit, minor difference, yet still it seems nice.

What problems will I run into if I use Backbone.Collection for non-Backbone.Models?

Are there any existing implementations, or a simple way to make a generic underscore collection class?

That got me thinking… I love JavaScript… this is now a challenge.

What is the simplest way to express an underscore “collection”?

What I came up with was to modify the Array prototype to include functions that tie into underscore. I didn’t want to maintain some large library, though, the way Backbone.js does. So, instead, I figured I could just pull the functions out of underscore and stick them on the Array prototype like so:

// This self-executing function pulls all the functions in the _ object and sticks them
// into the Array.prototype
(function () {
    var mapUnderscoreProperty = function (prp) {
        // This is a new function that uses underscore on the current array object
        Array.prototype[prp] = function () {
            // It builds an argument array to call with here
            var argumentsArray = [this];
            for (var i = 0; i < arguments.length; ++i) {
                argumentsArray.push(arguments[i]);
            }

            // Important to note: This strips the ability to rebind the context
            // of the underscore call
            return _[prp].apply(undefined, argumentsArray);
        };
    };

    // Loops over all properties in _, and adds the functions to the Array prototype
    for (var prop in _) {
        if (_.isFunction(_[prop])) {
            mapUnderscoreProperty(prop);
        }
    }
})();

Here is the same example as above, written with the new Array prototype:

var simplearray = [5, 4, 3, 2, 1, 6, 7, 8, 9, 10];
var greaterThanFour = simplearray.filter(function(item) {
    return item > 4;
}); // greaterThanFour contains a brand new array containing [5, 6, 7, 8, 9, 10]

I like the syntax better. It feels like the filter function is part of the Array object. Actually, because of JavaScript's dynamic nature, it IS.

Test Driven Development (TDD) vs. Traditional Testing

TDD can get you pretty far, but integration testing is necessary.

The subject of this post is a bit of a misnomer, because the two are not mutually exclusive. That is to say, test driven development is not a replacement for testing. In fact, test driven development has less to do with testing than it does with design. TDD should drive your class design in such a way that it becomes easier to get to the real testing phase. There are certain cases that are not going to be apparent during the initial class design, so when developing any application, testing should begin as soon as possible. TDD can get you there faster, because a lot of the pieces of the application can be faked, since the design is testable!

Anything that gets mocked in a unit test can be faked in a built and deployed application, so business users, UX specialists, and designers will get a chance to play with the app to tweak requirements very early in the process! This early change is a lot less risky than late change when an application is “done” because the whole system is less complex.
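
As a rough sketch of what that kind of fake can look like, here is an in-memory repository registered in place of EFRepository for a demo build. It reuses the IRepository<Enemy> and Enemy types from the repository posts above, so treat it as an illustration rather than code from this project; no database is needed to click through the UI.

// A hand-rolled fake: the same interface the real EF repository implements,
// backed by a plain in-memory list of demo data.
public class InMemoryEnemyRepository : IRepository<Enemy>
{
    private readonly List<Enemy> _enemies = new List<Enemy>
    {
        new Enemy { Name = "Baddy", Abilities = new List<Ability>() }
    };

    public void InsertOrUpdate(Enemy entity)
    {
        if (!_enemies.Contains(entity)) _enemies.Add(entity);
    }

    public void Remove(Enemy entity) { _enemies.Remove(entity); }

    public IQueryable<Enemy> FindAll() { return _enemies.AsQueryable(); }

    public IQueryable<Enemy> Find(Expression<Func<Enemy, bool>> predicate)
    {
        return FindAll().Where(predicate);
    }

    public Enemy First(Expression<Func<Enemy, bool>> predicate)
    {
        return FindAll().First(predicate);
    }

    public Enemy FirstOrDefault(Expression<Func<Enemy, bool>> predicate)
    {
        return FindAll().FirstOrDefault(predicate);
    }
}

// In the demo bootstrapper, swap the registration and everything downstream just works:
container.RegisterType<IRepository<Enemy>, InMemoryEnemyRepository>();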

Take for example my most recent project, for which I am using TDD. It is a gesture library for desktop games using XNA (not Windows 8 Store games). I created a GestureProvider object which relied on an ITouchEventProvider interface to receive raw TouchEvent objects and return GestureSample objects. Using just those four objects, I wrote some simple tests that would prove Tap, Drag, and Pinch gestures could be detected given the proper touch events.

The tap test went something like…

[TestMethod]  
public void single_tap_registers_from_one_touch()  
{  
    // Given a mock ITouchEventProvider that returns the following
    _container  
        .Arrange<ITouchEventProvider>(p => p.Events)  
        .Returns(new List<TouchEvent>
        {  
            new TouchEvent  //    touch down at 0,0
            {  
                Id = 1,  
                Position = Vector2.Zero,  
                Action = TouchEvent.TouchEventAction.Down,  
                TimeStamp = DateTime.Now  
            },   
            new TouchEvent  //    touch up at 0,0
            {  
                Id = 1,  
                Position = Vector2.Zero,  
                Action = TouchEvent.TouchEventAction.Up,  
                TimeStamp = DateTime.Now.AddMilliseconds(200.0)  
            }  
        });  

    var gestureProvider = _container.Instance;  

    // Get gestures from the real GestureProvider object (system under test or SUT)
    var samples = gestureProvider.GetSamples();  

    // Assert that there is one GestureSample object for a Tap at 0,0 
    var gestureSamples = samples.ToList();  
    Assert.AreEqual(1, gestureSamples.Count);  

    var tap = gestureSamples[0];  
    Assert.AreEqual(Vector2.Zero, tap.Delta);  
    Assert.AreEqual(Vector2.Zero, tap.Delta2);  
    Assert.AreEqual(Vector2.Zero, tap.Position);  
}

I did that test, and another for Drag and Pinch. Everything seemed to be going so well that I wanted to test them out, because I had a sneaking suspicion that I was missing something. I wrote up a quick test for a real ITouchEventProvider implementation that would use an interop library to listen for events, and provide them to the GestureProvider. I fired up a real game and added the necessary code to use the GestureProvider. I noticed one thing right away: Tap was not registering as a tap, but instead it was a drag. I double checked my tests, and it all looked ok, so I had to debug a bit. Eventually I found that my assumption about what events would fire for a tap was flawed. There could be any number of “move” events between “down” and “up”. I made the quick fix to add one move event to the test arrangement and fixed the GestureProvider so that the test passed, and then it worked. This proves that integration testing is a very important step in any system.

My unit test alone did not make the whole system work, but via TDD, I had designed the classes such that there was a clear path to fix the test so that it satisfied the real-world scenario instead of the erroneous assumption I made. Chalk up another TDD win!

How does JavaScript Scope work?

In a previous post, I reviewed how JavaScript treats the “this” keyword. In this post, I want to talk about how JavaScript defines scope. As a C# programmer coming to JavaScript a few years ago, I did not know how JavaScript scoping worked, and I assumed my C# knowledge of “this” and scope would carry over to JS.

Only functions create scope in JavaScript

Take the following C# code for example:

public int x = 1; // this is a class member, and thus is scoped to the class instance
void Foo()
{
    if (true)
    {
        int i = 1;
    }
    // i is inaccessible here, because it is scoped to the if block
}

And the following javascript code:

var x = 1; // Variables not declared in a function are global
function foo() {
    if (true) {
        var i = 1;
    }

    alert(i); // This is perfectly legal, and i is accessible here, because var is scoped to the function, not the block.

    // Any variables declared here are scoped to the function foo.
    // To force scope:
    (function () {
        var y = 1;
        z = 2; // Declare a variable in the global scope by leaving out the var keyword!
    })();
    // y is not accessible here, because it was declared inside of a function
    // an anonymous self executing function still creates scope
}

alert(typeof z); // "undefined" - z does not exist yet; alert(z) here would throw a ReferenceError
foo();
alert(z); // z is 2 after running foo().
alert(window.z); // z is attached to the window object because it was declared without var!

Pay attention to the comments, please, especially the bit about leaving out var, which creates a globally scoped variable attached to window.

This can be a big sticking point for developers coming from C# or Java, where scope works very differently. Many bloggers will take this type of post to the extreme and explain other concepts like closures and the prototype, as well as mixing in the topic of how the “this” keyword gets bound, but I am keeping this succinct for a reason. I’ve already covered “this” in a previous post, and I can probably do a post on closures and using the prototype more in depth another time.

In my opinion, this topic stands on its own as one of the most confusing points for a developer that is new to JavaScript, so it deserves a separate post.

Multi touch and gesture input in Windows with DPI scaling

I mentioned in a previous post that I am working on a new project related to gesture input. That very day, I hit a wall regarding desktop scaling, and last night I broke through it! Perhaps a topic for another post: with some applications, TDD can get you to a certain point, but integration testing is a must pretty early on.

The FRBTouch project is no exception! There are a few different problems to solve with this project:

  • Touch event capturing
  • Gesture detection
    • Taking touch events and making gestures
    • e.g. One touch event down then up is a tap
  • Coordinate translation
    • Taking window coordinates and translating them in an application (e.g. a FlatRedBall game)

The first two bullet points turned out to be the easiest, because they were mockable. For instance:

        [TestMethod]
        public void single_tap_registers_from_one_touch()
        {
            // Arrange
            _container
                .Arrange<ITouchEventProvider>(p => p.Events)
                .Returns(new List<TouchEvent>
                {
                    new TouchEvent
                    {
                        Id = 1,
                        Position = Vector2.Zero,
                        Action = TouchEvent.TouchEventAction.Down,
                        TimeStamp = DateTime.Now
                    },
                    new TouchEvent
                    {
                      Id = 1,
                      Position = Vector2.Zero,
                      Action = TouchEvent.TouchEventAction.Move,
                      TimeStamp = DateTime.Now.AddMilliseconds(10)
                    },
                    new TouchEvent
                    {
                        Id = 1,
                        Position = Vector2.Zero,
                        Action = TouchEvent.TouchEventAction.Up,
                        TimeStamp = DateTime.Now.AddMilliseconds(200.0)
                    }
                });
            var gestureProvider = _container.Instance;

            // Act
            var samples = gestureProvider.GetSamples();

            // Assert
            var gestureSamples = samples.ToList();
            Assert.AreEqual(1, gestureSamples.Count);

            var tap = gestureSamples[0];
            Assert.AreEqual(Vector2.Zero, tap.Delta);
            Assert.AreEqual(Vector2.Zero, tap.Delta2);
            Assert.AreEqual(Vector2.Zero, tap.Position);
        }

That’s the test that proves a tap gesture is detectable given how the touch events are provided. It was easy to set up a mock scenario for drag and pinch as well, and just assert the required gesture return values. The TouchEvent object also maps pretty closely to the events that User32.dll provides, so there wasn’t much to test for actually capturing events.

The major problems came when attempting to translate coordinates from touching an XNA game window into world coordinates. I use a Surface Pro for all development, and having 150% scaling on at all times is pretty much a necessity because the screen is small. Windows scales all windows up, but in doing so it breaks the coordinate system for touch input. This is not something you can see or solve with test driven development (at least not with traditional unit tests), because it requires a live, scaled window and a graphics object to operate.

To solve the problem, you simply have to disable the auto-scaling and tell Windows that the application will handle the DPI settings itself. In other words, you have to make your application DPI-aware (more info). The window will then not be auto-scaled, the coordinate system will not be broken, and normal translation routines will work.
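
One way to do that (among others, such as a dpiAware entry in the application manifest) is a P/Invoke call to the Win32 SetProcessDPIAware function before the game window is created. Here is a minimal sketch, where Game1 stands in for whatever your XNA Game subclass is called:

using System;
using System.Runtime.InteropServices;

static class Program
{
    // Tells Windows this process handles DPI itself, so the window is not
    // virtualized/auto-scaled and touch coordinates line up with real pixels.
    [DllImport("user32.dll")]
    private static extern bool SetProcessDPIAware();

    [STAThread]
    static void Main()
    {
        // Must be called before any window is created.
        SetProcessDPIAware();

        using (var game = new Game1())
        {
            game.Run();
        }
    }
}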