10 reasons to use TDD (Test Driven Development)

  1. It will help you improve your OOD

    Proper object oriented design is the key to writing extensible, maintainable, and stable software. If you pile too much functionality into one big class or one big method, you’re just asking for trouble. TDD makes it easier to adhere to the SRP (Single Responsibility Principle) by encouraging you to create smaller classes, each with less functionality.

  2. Get it in front of your users faster

    Designing your classes to inherently rely on abstract (mockable) dependencies is a sure way to get on the fast track to building a demo. For example, in a database driven application, a mock data layer can be substituted with generated data to be consumed by the front end. Though somewhat unorthodox, mocking frameworks can work just as well to fake a back end system in a running application as they can for unit test mocking! In software development, getting the product in front of your users can be the most important step, because it gives them a chance to change requirements early when it’s less painful.
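    A minimal sketch of the idea in JavaScript (all names here are hypothetical, just for illustration): the front end consumes whatever data layer it is handed, so a fake layer full of generated data can stand in for the real database until the back end exists.

    ```javascript
    // Hypothetical front-end function: it only knows the data layer's
    // interface (getHotels), not whether it talks to a real database.
    function renderHotelList(dataLayer) {
      return dataLayer.getHotels().map(h => `${h.name} - $${h.rate}`);
    }

    // Fake data layer with generated data, good enough for a demo:
    const fakeDataLayer = {
      getHotels: () => [
        { name: 'Demo Hotel A', rate: 99 },
        { name: 'Demo Hotel B', rate: 149 },
      ],
    };

    console.log(renderHotelList(fakeDataLayer));
    // [ 'Demo Hotel A - $99', 'Demo Hotel B - $149' ]
    ```

    Swapping in the real data layer later requires no changes to the front end, only to what gets passed in.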

  3. Good coverage

    TDD will have you writing a test for every bit of functionality you code. You automatically force yourself to maintain a high code coverage metric if you stick to the cadence: write a failing test, make it pass, refactor.
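    The cadence can be sketched in plain JavaScript with Node’s built-in assert module (the function under test is made up for illustration):

    ```javascript
    const assert = require('assert');

    // Red: the assertion below is written first, before fullName exists,
    // so the very first run fails.
    // Green: the simplest implementation that satisfies the test.
    function fullName(first, last) {
      return `${first} ${last}`;
    }

    // Refactor: with the test in place, the internals can change safely.
    assert.strictEqual(fullName('Ada', 'Lovelace'), 'Ada Lovelace');
    console.log('test passed');
    ```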

  4. Quickly verify all functionality

    When refactoring code, it is very useful to be able to verify all existing functionality in an instant. This isn’t necessarily a benefit of TDD itself, but rather of having good unit test coverage of business rules. Sometimes in an emergency (e.g. an outage caused by a code bug), we are forced to cut corners and skip writing a test first. Having that safety net of unit tests there is a huge confidence booster. Naturally, by developing with TDD, you are building that safety net as you go!

  5. It forces you to rely only on abstractions (Dependency Inversion Principle).

    You wouldn’t solder a lamp directly to the electrical wiring in a wall, would you?

    The dependency inversion principle is one of the five SOLID principles of object oriented design. It states that classes should depend on abstractions, not on concrete types: every other class your class uses should be referenced via an abstraction. When done correctly, test driven development encourages adherence to this principle, because you will always need something to mock when writing a test.
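    Here is a sketch of the principle in JavaScript (names are hypothetical): the service receives its dependency as an abstraction, so a test can hand it a mock instead of a concrete database class.

    ```javascript
    // The service depends only on the shape of dataSource (an abstraction),
    // never on a concrete type.
    function createSalesReport(dataSource) {
      return {
        total: () => dataSource.getSales().reduce((sum, s) => sum + s, 0),
      };
    }

    // In a unit test, a hand-rolled mock plays the role of the data layer:
    const mockDataSource = { getSales: () => [10, 20, 30] };
    const report = createSalesReport(mockDataSource);
    console.log(report.total()); // 60
    ```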

  6. Smaller problems are easier to solve.

    (2 + 2) / 2 – 2 * 6 = ?

    If you understand the order of operations (if you’re here, I’m sure you do), then your brain automatically broke this equation down into solvable parts. Likely, you figured out that 2+2 is 4, and 2 * 6 is 12, so the equation became 4/2 – 12. Then you might just solve 4/2 and finish out with -10. The point is that you broke the larger problem down into smaller chunks, because that’s the easiest way to get to the answer. (Bonus points if you can just look at equations like that and spit out an answer!). Any programmer worth their salt isn’t going to attack a large application by writing one big blob of code. They’re going to understand what the customer wants, break it down into pieces, and build those pieces to fit together for the larger system. TDD is a great way to do just that without completely understanding the big picture immediately.
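    The same breakdown, written out as code instead of done in your head:

    ```javascript
    // (2 + 2) / 2 - 2 * 6, solved the way described above:
    // small pieces first, then combine them.
    const sum = 2 + 2;                 // 4
    const product = 2 * 6;             // 12
    const quotient = sum / 2;          // 2
    const result = quotient - product; // -10

    console.log(result);                          // -10
    console.log(result === (2 + 2) / 2 - 2 * 6);  // true
    ```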

  7. It feels really good

    I’ve done quite a few projects with TDD now. The first time it feels strange, like it can’t possibly work. You toil for a few days or weeks on writing these individual little bits, solving little tiny problems as you go. The larger problem is not necessarily in your brain the entire time, so it feels foreign. Finally, when it comes time to make a demo, you get to connect all the little pieces, which almost always involves an IoC container for me. This is a very satisfying process and brings me a lot of joy.

  8. Opens up the path for future testing

    This is a topic I have talked about at length. Some may not see the value in this immediately, but I find it extremely important. Simply by following the TDD pattern, you are ensuring the future testability of your classes. No one writes bug-free code every time. I can’t tell you how many times I have been seriously happy when it comes time to fix a bug in code I built with TDD. I come back to find that in order to reproduce the bug, I just have to provide a very specific mock setup in a new unit test. I laid the path in the past, and now it is super easy to prove the bug is fixed by making that failing unit test pass.
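    For example, suppose a bug report comes in against a hypothetical averaging function that breaks on an empty list. Reproducing the bug is just one very specific new test, and the fix turns that test green:

    ```javascript
    const assert = require('assert');

    // The fixed function: the bug was that an empty list produced NaN
    // (0 / 0) instead of a sensible default.
    function average(values) {
      if (values.length === 0) return 0; // the one-line fix
      return values.reduce((sum, v) => sum + v, 0) / values.length;
    }

    assert.strictEqual(average([]), 0);     // the new test that reproduced the bug
    assert.strictEqual(average([2, 4]), 3); // existing behavior still passes
    console.log('regression covered');
    ```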

  9. Stops analysis paralysis

    From Wikipedia:

    Analysis paralysis or paralysis of analysis is an anti-pattern, the state of over-analyzing (or over-thinking) a situation so that a decision or action is never taken, in effect paralyzing the outcome.

    Sure, any new application needs some analysis, but when the above happens, nothing gets done. TDD allows you to get started right away by solving small problems immediately. Sometimes, the bigger picture starts to come together when you start chipping away at the little bits.

  10. Slow is smooth, and smooth is fast

    This is an old saying from firearms training: if you move too fast, you’re going to make a mistake and fail. I believe the same saying can be applied to software development as well. The argument against TDD and unit tests in general that I’ve heard in the past is that they slow you down. It’s a natural thought to have: I can either start writing code to solve the problem, or start writing code to test non-existent code, and then write the same code to solve the problem anyway.

    WRONG!

    This argument infuriates me, because it typically comes from someone in power who is trying to justify cutting corners. Sure, if you put two people on the same complex problem, one using TDD and one hacking directly toward a solution, the latter will finish quicker, but that’s just not a realistic long-term scenario. With any given application, someone is going to need changes. While TDD might take longer in the initial phases, it pays dividends extremely quickly once changes start pouring in. The class design is decoupled and class responsibilities are tightly limited, so a requirements change very rarely means changing working code! Instead, it is safer and quicker to extend the design with new code. Fewer bugs are created, and new features can be added much more quickly in the long run.

How does JavaScript Scope work?

In a previous post, I reviewed how JavaScript treats the “this” keyword. In this post, I want to talk about how JavaScript defines scope. As a C# programmer coming to JavaScript a few years ago, I did not know this, and I assumed my C# understanding of “this” and scope would carry over to JS.

Only functions create scope in JavaScript

Take the following C# code for example:

public int x = 1; // this is a class member, and thus is scoped to the class instance
void Foo()
{
    if (true)
    {
        int i = 1;
    }
    // i is inaccessible here, because it is scoped to the if block
}

And the following javascript code:

var x = 1; // Variables not declared in a function are global
function foo() {
    if (true) {
        var i = 1;
    }

    alert(i); // This is perfectly legal, and i is accessible here: var is scoped to the function, not the if block.

    // Any variables declared here are scoped to the function foo.
    // To force scope:
    (function () {
        var y = 1;
        z = 2; // Declare a variable in the global scope by leaving out the var keyword!
    })();
    // y is not accessible here, because it was declared inside of a function
    // an anonymous self executing function still creates scope
}

alert(typeof z); // "undefined": z does not exist yet, and a plain alert(z) here would throw a ReferenceError
foo();
alert(z); // z is 2 after running foo().
alert(window.z); // z is attached to the window object because it was declared without var!

Pay close attention to the comments, especially the bit about leaving out var, which creates a globally scoped variable attached to window.

This can be a big sticking point for developers coming from C# or Java where scope is very different. Many bloggers will take this type of post to the extreme and explain other concepts like closures and prototype, as well as combining the topic of context binding the “this” keyword, but I am keeping this succinct for a reason. I’ve already covered “this” in a previous post, and I can probably do a post on closures and using the prototype more in depth another time.

In my opinion, this topic stands on its own as one of the most confusing points for a developer that is new to JavaScript, so it deserves a separate post.

Multi touch and gesture input in Windows with DPI scaling

I mentioned in a previous post that I am working on a new project related to gesture input. That very day, I hit a wall regarding desktop scaling, and last night I broke through it! Perhaps a topic for another post: with some applications, TDD can get you to a certain point, but integration testing is a must pretty early on.

The FRBTouch project is no exception! There are a few different problems to solve with this project:

  • Touch event capturing
  • Gesture detection
    • Taking touch events and making gestures
    • e.g. One touch event down then up is a tap
  • Coordinate translation
    • Taking window coordinates and translating them in an application (e.g. a FlatRedBall game)

The first two bullet points turned out to be the easiest, because they were mockable. For instance:

public void single_tap_registers_from_one_touch()
{
    // Arrange
    _container
        .Arrange<ITouchEventProvider>(p => p.Events)
        .Returns(new List<TouchEvent>
        {
            new TouchEvent
            {
                Id = 1,
                Position = Vector2.Zero,
                Action = TouchEvent.TouchEventAction.Down,
                TimeStamp = DateTime.Now
            },
            new TouchEvent
            {
                Id = 1,
                Position = Vector2.Zero,
                Action = TouchEvent.TouchEventAction.Move,
                TimeStamp = DateTime.Now.AddMilliseconds(10)
            },
            new TouchEvent
            {
                Id = 1,
                Position = Vector2.Zero,
                Action = TouchEvent.TouchEventAction.Up,
                TimeStamp = DateTime.Now.AddMilliseconds(200.0)
            }
        });
    var gestureProvider = _container.Instance;

    // Act
    var samples = gestureProvider.GetSamples();

    // Assert
    var gestureSamples = samples.ToList();
    Assert.AreEqual(1, gestureSamples.Count);

    var tap = gestureSamples[0];
    Assert.AreEqual(Vector2.Zero, tap.Delta);
    Assert.AreEqual(Vector2.Zero, tap.Delta2);
    Assert.AreEqual(Vector2.Zero, tap.Position);
}

That’s the test proving a tap gesture is detectable given how the touch events are provided. It was easy to set up mock scenarios for drag and pinch as well, and just assert the required gesture return values. The TouchEvent object also maps pretty closely to the events that User32.dll provides, so there wasn’t much to test for actually capturing events.
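For readers without the C# context, the classification logic being tested can be sketched in JavaScript (this is a rough illustration, not the actual FRBTouch code, and the threshold is made up):

```javascript
// Classify a sequence of touch events as a tap: the same touch id goes
// down and then up within a short time window.
function detectTap(events, maxMs = 300) {
  const down = events.find(e => e.action === 'down');
  if (!down) return null;
  const up = events.find(e => e.action === 'up' && e.id === down.id);
  if (!up || up.time - down.time > maxMs) return null;
  return { type: 'tap', position: down.position };
}

const gesture = detectTap([
  { id: 1, action: 'down', position: [0, 0], time: 0 },
  { id: 1, action: 'move', position: [0, 0], time: 10 },
  { id: 1, action: 'up',   position: [0, 0], time: 200 },
]);
console.log(gesture); // { type: 'tap', position: [ 0, 0 ] }
```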

The major problems came when attempting to translate coordinates from touching an XNA game window into world coordinates. I use a Surface Pro for all development, and having 150% scaling on at all times is pretty much a necessity because the screen is small. Windows scales all windows up, but in doing so it breaks the coordinate system for touch input. This is not something you can see or solve with test driven development (at least not with traditional unit tests), because it requires a live scaled window and a graphics object to operate.

To solve the problem, you have to disable the automatic scaling and tell Windows that the application will handle the DPI settings itself; in other words, you have to make your application DPI aware. The window will then not be auto-scaled, the coordinate system is no longer broken, and normal translation routines work.
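One common way to declare this for a desktop app of this era (a sketch; the real project may set it differently, e.g. via a P/Invoke call to SetProcessDPIAware) is a dpiAware entry in the application manifest:

```xml
<!-- app.manifest fragment: tells Windows this process handles DPI itself,
     so the window is not auto-scaled -->
<application xmlns="urn:schemas-microsoft-com:asm.v3">
  <windowsSettings>
    <dpiAware xmlns="http://schemas.microsoft.com/SMI/2005/WindowsSettings">true</dpiAware>
  </windowsSettings>
</application>
```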

Bust that cache!

<shamelessplug>I don’t know if I have shared this, but I work for Caesars Entertainment doing web development. Go make a hotel reservation and you’ll be using everything I have been working on for the last 2 years.</shamelessplug>

Anyway, as web development goes, the site is very javascript heavy, and we have some core functionality existing in a javascript file linked via a

<script type="text/javascript" language="javascript" src="blahblah.js"></script>

tag. Regardless of the specifics, certain browsers end up being a pain when it comes to caching. For instance, we make a change to blahblah.js, but the browser has an old version of it cached for speedy loading and it ends up not picking up the change. It takes some time for the browser to get around to updating, and every browser is different, so I did some searching.

I found a site (http://davidwalsh.name/prevent-cache) which basically says to do this:

<script type="text/javascript" language="javascript" src="blahblah.js?(insert some dynamic number based on current time here)"></script>

This effectively makes the browser see a new reference for the JavaScript file every time it loads the page, which is not ideal for all situations: browser caching can be a very helpful thing for site performance. We like the browser cache; we just need it to work for us, not against us.

The simple solution is:

<script type="text/javascript" language="javascript" src="blahblah.js?(insert static version # here)"></script>

The version # is actually loaded from a web.config file, but that’s neither here nor there. What matters is that every time we want the browser to reload the cached js file, we simply increment that version #; the browser doesn’t have the new src reference cached, so it has to download the file again.
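A tiny sketch of the idea (the helper name is hypothetical; in the real setup the version number comes from web.config on the server side):

```javascript
// Build a script src with a static cache-busting version number.
// Bump the version whenever the file changes, and every browser
// re-downloads it exactly once, then caches the new copy.
function versionedSrc(path, version) {
  return `${path}?v=${encodeURIComponent(version)}`;
}

console.log(versionedSrc('blahblah.js', '1.0.3')); // blahblah.js?v=1.0.3
```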

Cache busted!