Unit test structure guidelines

I have recently started structuring my unit tests differently, and I have to share it. Prior to this new convention, things were messy. Let’s look at an example.

Let’s say there’s a class you need to test called WidgetController. This controller is responsible for inserting, updating, and deleting Widgets. It has these three actions on it:

public class WidgetController
{
    public void Insert(Widget widget)
    {
        // Do something
    }
    public void Update(Widget widget)
    {
        // Do something
    }
    public void Delete(int widgetId)
    {
        // Do something
    }
}

Without the new method of unit test structure, I might have tested it this way:

[TestClass]
public class WidgetControllerTests
{
    [TestMethod]
    public void when_inserting_someassertion()
    {
    }

    [TestMethod]
    public void when_updating_someassertion()
    {
    }

    [TestMethod]
    public void when_deleting_someassertion()
    {
    }
}

This has the benefit of only using one class for all of the WidgetController tests, but this class can quickly grow. The new method I use:

public class WidgetControllerTests
{
    [TestClass]
    public class InsertTests
    {
        [TestMethod]
        public void someassertion()
        {
        }
    }

    [TestClass]
    public class UpdateTests
    {
        [TestMethod]
        public void someassertion()
        {
        }
    }

    [TestClass]
    public class DeleteTests
    {
        [TestMethod]
        public void someassertion()
        {
        }
    }
}

With one assertion made, this looks cumbersome, but imagine when you have 15 different mini assertions for an insert test. Perhaps your business rules are fairly complex, and you need to verify different aspects through many different unit tests. This structure gives you the separation to keep those tests well organized. The test runner UI and any test result reporting suites will be able to break your tests down by nested class, giving you a much better idea of what’s going on.

10 reasons to use TDD (Test Driven Development)

  1. It will help you improve your OOD

    Proper object oriented design is the key to writing extensible, maintainable, and stable software. If you pile too much functionality into one big class or one big method, you’re just asking for trouble. TDD makes it easier to adhere to the SRP (Single Responsibility Principle) by encouraging you to create smaller classes with less functionality.

  2. Get it in front of your users faster

    Designing your classes to rely on abstract (mockable) dependencies is a sure way to get on the fast track to building a demo. For example, in a database-driven application, a mock data layer can be substituted with generated data for the front end to consume. Though somewhat unorthodox, mocking frameworks can fake a back end in a running application just as well as they can mock for a unit test (see the sketch after this list). In software development, getting the product in front of your users can be the most important step, because it gives them a chance to change requirements early, when it’s less painful.

  3. Good coverage

    TDD will have you writing a test for every bit of functionality you are coding. You are automatically forcing yourself to have a high code coverage metric if you stick to the cadence: write a failing test, make it pass, refactor.

  4. Quickly verify all functionality

    When refactoring code, it is very useful to be able to quickly verify all existing functionality in an instant. This isn’t necessarily a benefit of TDD itself, but rather of having good unit test coverage of business rules. Sometimes in an emergency situation (e.g. an outage due to a code bug), we are forced to cut corners and not necessarily write a test first. Having that safety net of unit tests there is a huge confidence booster. Naturally, by developing with TDD, you are building a safety net as you go!

  5. It forces you to rely only on abstractions (Dependency Inversion Principle).

    You wouldn’t solder a lamp directly to the electrical wiring in a wall, would you?

    The dependency inversion principle is one of the five SOLID principles of object oriented design. It states that classes should depend on abstractions rather than on concrete types; that is to say, a class should not reference another concrete class directly. When done correctly, test driven development encourages adherence to this principle, because you will always need something to mock when writing a test.

  6. Smaller problems are easier to solve.

    (2 + 2) / 2 - 2 * 6 = ?

    If you understand the order of operations (if you’re here, I’m sure you do), then your brain automatically broke this equation down into solvable parts. Likely, you figured out that 2 + 2 is 4, and 2 * 6 is 12, so the equation became 4/2 - 12. Then you solve 4/2 to get 2, and finish with 2 - 12 = -10. The point is that you broke the larger problem down into smaller chunks, because that’s the easiest way to get to the answer. (Bonus points if you can just look at equations like that and spit out an answer!) Any programmer worth their salt isn’t going to attack a large application by writing one big blob of code. They’re going to understand what the customer wants, break it down into pieces, and build those pieces to fit together for the larger system. TDD is a great way to do just that without completely understanding the big picture immediately.

  7. It feels really good

    I’ve done quite a few projects with TDD now. The first time it feels strange, like it can’t possibly work. You toil for a few days or weeks on writing these individual little bits, solving little tiny problems as you go. The larger problem is not necessarily in your brain the entire time, so it feels foreign. Finally, when it comes time to make a demo, you get to connect all the little pieces, which almost always involves an IoC container for me. This is a very satisfying process and brings me a lot of joy.

  8. Opens up the path for future testing

    This is a topic I have talked about at length. Some may not see the value in this immediately, but I find this extremely important. Simply by following the TDD pattern, you are ensuring future testability of your classes. No one writes bug-free code every time. I can’t tell you how many times I have been seriously happy when it comes time to fix a bug in code that I’ve built with TDD. I come back to find that in order to reproduce the bug, I just have to provide a very specific mock in a new unit test. The path was laid by me in the past, and now it is super easy to prove that the bug is fixed by making that failing unit test pass.

  9. Stops analysis paralysis

    From Wikipedia:

    Analysis paralysis or paralysis of analysis is an anti-pattern, the state of over-analyzing (or over-thinking) a situation so that a decision or action is never taken, in effect paralyzing the outcome.

    Sure, any new application needs some analysis, but when the above happens, nothing gets done. TDD allows you to get started right away by solving small problems immediately. Sometimes, the bigger picture starts to come together when you start chipping away at the little bits.

  10. Slow is smooth, and smooth is fast

    This is an old saying from firearms training. It means that if you move too fast, you’re going to make a mistake and fail. I believe the same saying can be applied to software development as well. The argument against TDD and unit tests in general that I’ve heard in the past is that they slow you down. It’s a natural thought to have: I can either start writing code to solve the problem, or start writing code to test non-existent code, and then write the same code to solve the problem anyway.

    WRONG!

    This argument infuriates me, because it typically comes from someone in power who is trying to justify cutting corners. Sure, if you put two people on solving the same complex problem, one with TDD and one hacking directly toward a solution, the latter is going to finish quicker, but that’s just not a real-life scenario long term. With any given application, someone is going to need changes. While TDD might take longer in the initial phases, it pays dividends extremely quickly when changes start pouring in. The class design is decoupled and class responsibilities are tightly limited, so requirement changes very rarely mean changing working code! Instead, it is safer and quicker to extend and write new code. Fewer bugs are created, and new features can be added a lot quicker in the long run.
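
As promised in reason 2, here is a minimal sketch of faking a back end in a running demo build. Everything in it is illustrative: IProductRepository and Product are hypothetical types invented for the example, JustMock supplies the fake, and Unity wires it in.

using System.Collections.Generic;
using Microsoft.Practices.Unity;
using Telerik.JustMock;

public interface IProductRepository
{
    IEnumerable<Product> FindAll();
}

public class Product
{
    public string Name { get; set; }
}

public static class DemoBootstrapper
{
    public static IUnityContainer Configure()
    {
        // Arrange a JustMock proxy to return canned demo data
        var fakeRepository = Mock.Create<IProductRepository>();
        Mock.Arrange(() => fakeRepository.FindAll())
            .Returns(new List<Product>
            {
                new Product { Name = "Demo widget" },
                new Product { Name = "Demo gadget" }
            });

        // Register the fake where the real data layer implementation would
        // normally go; the rest of the app resolves IProductRepository as usual
        var container = new UnityContainer();
        container.RegisterInstance(fakeRepository);
        return container;
    }
}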

Spriter implementation for FlatRedBall

For over a year now, I have been working on an API and plugin for the FlatRedBall engine that makes it dead simple to load, play, and manipulate Spriter animations in your FlatRedBall games. The implementation is written as an extension to the FlatRedBall engine, so you get all the goodness that comes from using first-class objects that the engine understands.

A few features that other Spriter implementations may not have:

  • Positioning in 3D space
  • Scaling the entire animation
  • Setting animation speed
  • Reversing animation playback (negative speed)
  • Rotating an animation on any axis (X/Y/Z)
  • Cloning animations
  • Playing every entity in an animation file simultaneously via SpriterObjectCollection
    • Every one of the features above works on SpriterObjectCollection as well

This is just a subset of the features in my Spriter implementation. If you are interested, install the FlatRedBall development kit, and head over to the releases section to get the latest release of my plugin. It’s a simple installation process into the Glue tool you get in FRBDK. Follow the tutorials, and you’ll be creating and using Spriter animations in a real game in no time!

Why using a generic repository pattern in Entity Framework still works!

I have had lengthy conversations on the topic of using a generic repository pattern on top of Entity Framework (I’m kainazzzo). I believe that I am probably in a minority of developers that think it’s a perfectly acceptable practice. Here are some of the arguments people make about using IRepository<T>:

  1. It couples your class directly to Entity Framework, making it difficult to switch
  2. There is no value-add using a generic repository over top of EF, because it is already a Repository pattern
    • i.e. DbSet<Entity> is the Repository, and DbContext is the UnitOfWork
  3. It causes “leaky” abstraction, encouraging .Include() calls to be made in your business layer code
    • Or that in general, IQueryable should not be returned from a repository, because deferred queries are leaky

I will address these points directly.

It couples your class directly to Entity Framework, making it difficult to switch

The last time I changed data access in running production code was never. There would have to be an astronomically large gulf in functionality between two ORM frameworks for this to even be a consideration. Once code is in production and being used by real live people, any change is risk. Risk is not taken unless the change is deemed to be of some value. Simply changing ORM or data access frameworks would be a tough sell to any business unit. Once you choose Entity Framework, you will not change to NHibernate. It is just not going to happen, so this is a difficult argument to make in my opinion. If you really have a problem with it, you can still do a higher level implementation on top of EFRepository that abstracts away the rest of the EF bits.

There is no value-add using a generic repository over top of EF, because it is already a Repository pattern

BOLOGNA SANDWICH. Seriously… now I’m hungry for processed meat.

What I mean is that this is preposterous. Take my IRepository<T> interface, for example:

public interface IRepository<T>
{
    void InsertOrUpdate(T entity);
    void Remove(T entity);
    IQueryable<T> Find(Expression<Func<T, bool>> predicate);
    IQueryable<T> FindAll();
    T First(Expression<Func<T, bool>> predicate);
    T FirstOrDefault(Expression<Func<T, bool>> predicate);
}
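
Business code programs against this interface alone. As a quick, hypothetical illustration (it assumes a Widget entity with a Name property, which the earlier example doesn’t define):

// Hypothetical consumer: depends only on IRepository<Widget>, never on EF
public class WidgetService
{
    private readonly IRepository<Widget> _widgets;

    public WidgetService(IRepository<Widget> widgets)
    {
        _widgets = widgets;
    }

    public Widget FindByName(string name)
    {
        return _widgets.FirstOrDefault(w => w.Name == name);
    }
}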

And the EF implementation:

public class EFRepository<T> : IRepository<T>
    where T : class, IEntity
{
    private readonly IUnitOfWork<T> _unitOfWork;
    private readonly DbSet<T> _dbSet;

    public EFRepository(IUnitOfWork<T> unitOfWork)
    {
        _unitOfWork = unitOfWork;
        _dbSet = _unitOfWork.DbSet;
    }

    public void InsertOrUpdate(T entity)
    {
        if (entity.Id != default(int))
        {
            // Existing entity: attach it if EF isn't tracking it yet,
            // then mark it Modified so SaveChanges issues an UPDATE
            if (_unitOfWork.Entry(entity).State == EntityState.Detached)
            {
                _dbSet.Attach(entity);
            }
            _unitOfWork.Entry(entity).State = EntityState.Modified;
        }
        else
        {
            // New entity: add it so SaveChanges issues an INSERT
            _dbSet.Add(entity);
        }
    }

    public void Remove(T entity)
    {
        _dbSet.Remove(entity);
    }

    public IQueryable<T> Find(Expression<Func<T, bool>> predicate)
    {
        return FindAll().Where(predicate);
    }

    public IQueryable<T> FindAll()
    {
        return _dbSet;
    }

    public T First(Expression<Func<T, bool>> predicate)
    {
        return FindAll().First(predicate);
    }

    public T FirstOrDefault(Expression<Func<T, bool>> predicate)
    {
        return FindAll().FirstOrDefault(predicate);
    }
}

How many times have you looked up how to do “Insert or Update” in EF? I don’t have to do that any more. I know my repository does it well, and all I have to do is make my entities implement IEntity, which simply ensures that an Id field exists. This is not a problem even with DB-first implementations, since EF generates its classes as partials, so the IEntity implementation can live in a separate partial class file. My value-add comes from EFRepository implementing basic Id checking in order to initiate a proper update in EF.
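
For reference, IEntity doesn’t need to be anything more than this (a minimal sketch; the EFRepository above only ever touches an integer Id):

public interface IEntity
{
    // The only thing EFRepository needs: an integer primary key
    int Id { get; set; }
}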

Then, there is unit testing. Some people would argue about the value-add of unit tests, but I see unit tests as a priceless artifact one puts on display and protects. They are a window into your code and how it is supposed to function, and they can be a huge safety net. See my post on TDD for more information. Let’s say you were developing a game data editor and had a class responsible for loading Enemies (EnemyEditor). Perhaps one of the simple requirements was that when loading enemies (GetEnemies()), the list of Abilities must not be null, even if the database provided it as such:

[TestMethod]
public void enemies_have_non_null_ability_list()
{
    var container = new MockingContainer<EnemyEditor>();
    container.Arrange<IRepository<Enemy>>(r => r.FindAll()).Returns(
        new List<Enemy>
        {
            new Enemy
            {
                Name = "Enemy1",
                Abilities = null
            }
        }.AsQueryable());

    var enemies = container.Instance.GetEnemies();

    Assert.IsNotNull(enemies[0].Abilities);
}

This unit test is for a very small requirement, and the arrangement of what IRepository<Enemy> returned was extremely easy to write (if you’re familiar with mocking). You didn’t have to jump through any hoops to mock up the DbSet within the DbContext by creating IDbContext or anything, which admittedly is not impossible, but it isn’t entirely intuitive, all apt alliteration aside.
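
For completeness, a sketch of an EnemyEditor that would satisfy this test (hypothetical, since the real class isn’t shown here, and Ability is an assumed type):

public class EnemyEditor
{
    private readonly IRepository<Enemy> _enemies;

    public EnemyEditor(IRepository<Enemy> enemies)
    {
        _enemies = enemies;
    }

    public List<Enemy> GetEnemies()
    {
        var enemies = _enemies.FindAll().ToList();
        foreach (var enemy in enemies)
        {
            // Business rule: Abilities is never null, even if the database
            // hands us a null list
            enemy.Abilities = enemy.Abilities ?? new List<Ability>();
        }
        return enemies;
    }
}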

So there is value in layering on top of Entity Framework with a generic repository pattern. The value is in InsertOrUpdate, as well as simplified unit test mocking.

It causes “leaky” abstraction, encouraging .Include() calls to be made in your business layer code

Given the value that it adds, I am totally alright with this. Yes, IQueryable.Include “leaks” implementation details through the abstraction layer, because .Include() only exists in Entity Framework; however, given that I already stated that the ORM will never change, I am alright with making a conscious decision NOT to abstract away the ORM. I am not abstracting away Entity Framework… it is already an abstraction of data access that I am comfortable with. I am actually EXTENDING Entity Framework by adding a generic repository pattern on top of it. It is much simpler to deal with IRepository<T> and IUnitOfWork<T> than to have some GameEntities override that mocks the DbSet objects inside. Also, I will say it again: you can still use the generic repository pattern as a starting point, and further abstract away the EF-specific bits with a more domain-centric data access object, as sketched below.
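
If you do want that extra insulation, the shape might be something like this (a hypothetical sketch; the names are illustrative, and Include is EF’s QueryableExtensions.Include from System.Data.Entity):

// A domain-centric data access object layered over IRepository<T>:
// callers never see IQueryable or .Include(), so nothing EF-specific
// leaks past this class
public class EnemyDataSource
{
    private readonly IRepository<Enemy> _repository;

    public EnemyDataSource(IRepository<Enemy> repository)
    {
        _repository = repository;
    }

    public IList<Enemy> GetEnemiesWithAbilities()
    {
        // The EF-specific Include call is confined to this one method
        return _repository.FindAll()
            .Include(e => e.Abilities)
            .ToList();
    }
}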

It is my experienced opinion that software development is always about trade-offs. There is never a one-size-fits-all solution, but there are patterns that get us as far as 99% of the way there. It’s up to us as software craftspeople to massage and mold code into a product that works and is not a burden to maintain.

I leave you with two quotes from Bruce Lee:

“Don’t think, Feel, it is like a finger pointing out to the moon, don’t concentrate on the finger or you will miss all that heavenly glory.”

To me, coming up with reasons not to do the obvious thing, simply because it is not exactly how you think it should be, is like concentrating on the finger.

“All fixed set patterns are incapable of adaptability or pliability. The truth is outside of all fixed patterns.”

Without adapting the known repository pattern to fit the Entity Framework appropriately, we limit ourselves to something that is not ideal for the given circumstance.


Test Driven Development (TDD) vs. Traditional Testing

TDD can get you pretty far, but integration testing is necessary.

The subject of this post is a bit of a misnomer, because the two are not mutually exclusive. That is to say, test driven development is not a replacement for testing. In fact, test driven development has less to do with testing than it does with design. TDD should drive your class design in such a way that makes it easier to get to the real testing phase. There are certain cases that are not going to be apparent during the initial class design, so when developing any application, testing should begin as soon as possible. TDD can get you there faster, because a lot of the pieces of the application can be faked since the design is testable!

Anything that gets mocked in a unit test can be faked in a built and deployed application, so business users, UX specialists, and designers will get a chance to play with the app to tweak requirements very early in the process! This early change is a lot less risky than late change when an application is “done” because the whole system is less complex.

Take, for example, my most recent project, for which I am using TDD. It is a gesture library for desktop games using XNA (not Windows 8 store games). I created a GestureProvider object which relied on an ITouchEventProvider interface to receive raw TouchEvent objects and return GestureSample objects. Using just these four conceived objects, I wrote some simple tests that would prove Tap, Drag, and Pinch gestures could be detected given the proper touch events.

The tap test went something like…

[TestMethod]
public void single_tap_registers_from_one_touch()
{
    // Given a mock ITouchEventProvider that returns the following
    _container
        .Arrange<ITouchEventProvider>(p => p.Events)
        .Returns(new List<TouchEvent>
        {
            new TouchEvent  //    touch down at 0,0
            {
                Id = 1,
                Position = Vector2.Zero,
                Action = TouchEvent.TouchEventAction.Down,
                TimeStamp = DateTime.Now
            },
            new TouchEvent  //    touch up at 0,0
            {
                Id = 1,
                Position = Vector2.Zero,
                Action = TouchEvent.TouchEventAction.Up,
                TimeStamp = DateTime.Now.AddMilliseconds(200.0)
            }
        });

    var gestureProvider = _container.Instance;

    // Get gestures from the real GestureProvider object (system under test or SUT)
    var samples = gestureProvider.GetSamples();

    // Assert that there is one GestureSample object for a Tap at 0,0
    var gestureSamples = samples.ToList();
    Assert.AreEqual(1, gestureSamples.Count);

    var tap = gestureSamples[0];
    Assert.AreEqual(Vector2.Zero, tap.Delta);
    Assert.AreEqual(Vector2.Zero, tap.Delta2);
    Assert.AreEqual(Vector2.Zero, tap.Position);
}

I wrote that test, and others for Drag and Pinch. Everything seemed to be going so well that I wanted to try them out, because I had a sneaking suspicion that I was missing something. I wrote up a quick test for a real ITouchEventProvider implementation that would use an interop library to listen for events and provide them to the GestureProvider. I fired up a real game and added the necessary code to use the GestureProvider. I noticed one thing right away: Tap was not registering as a tap; instead it was a drag. I double checked my tests, and they all looked ok, so I had to debug a bit. Eventually I found that my assumption about which events would fire for a tap was flawed: there could be any number of “move” events between “down” and “up”. I made the quick fix of adding one move event to the test arrangement, fixed the GestureProvider so that the test passed, and then it worked. This proves that integration testing is a very important step in any system.
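
The fix to the arrangement was just one extra element between the down and the up events, something like this (a sketch; it assumes a Move member on TouchEventAction):

new TouchEvent  //    touch move at 0,0 -- real taps often include moves
{
    Id = 1,
    Position = Vector2.Zero,
    Action = TouchEvent.TouchEventAction.Move,
    TimeStamp = DateTime.Now.AddMilliseconds(100.0)
},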

My unit test alone did not make the whole system work, but via TDD, I had designed the classes such that there was a clear path to fix the test so that it satisfied the real-world scenario instead of the erroneous assumption I made. Chalk up another TDD win!

Test driven development (TDD) should drive your class design (Part 1 of 2): The wrong way

Consider the following scenario. You work for Company A, and they want you to write a simple web portal for them. You learn that there is a process in place already that spits out about 30 images as email attachments, and they would much rather see these displayed on a web site. The images will have to be categorized into main, sub, and chart type. Sounds simple enough, right?

I want to attack this problem from two angles to show how test driven development can, and should, drive your class design. Again, TDD should drive your class design. Simply put: if you do it right, TDD should cause you to actually change how you design your classes and their dependencies into a much looser, flexible, and less brittle pattern.

First, let’s look at how one might write this without TDD. This theoretical person is still good with object oriented programming, but hasn’t quite made the leap to TDD. Maybe you fit this bill? Maybe you’ve said “I don’t even know what this is going to look like, so how would I write a unit test for it before it’s written??” to yourself in the past? Let’s get started.

Obviously from this example, one can imagine a Chart object, and perhaps a ChartCategory object:

public class Chart
{
    public string FileName { get; set; }
    public ChartCategory Category { get; set; }
}

public class ChartCategory
{
    public string Main { get; set; }
    public string Sub { get; set; }
    public string Type { get; set; }
}

One might then surmise that they need a way to load charts from disk:

public class ChartReader
{
    public IEnumerable<Chart> LoadChartsFromDisk(string path)
    {
        var files = System.IO.Directory.GetFiles(path);

        var charts = files.Select(f => new Chart
        {
            FileName = f
        }).ToList();

        foreach (var chart in charts)
        {
            chart.Category = GetCategory(chart, path);
        }

        return charts;
    }

    private ChartCategory GetCategory(Chart chart, string path)
    {
        var doc = XDocument.Load(path + "\\charts.xml");
        var chartElement = doc.Descendants("chart")
            .FirstOrDefault(x => x.Attribute("filename").Value == chart.FileName);
        if (chartElement != null)
        {
            return new ChartCategory
            {
                Main = chartElement.Attribute("maincategory").Value,
                Sub = chartElement.Attribute("subcategory").Value,
                Type = chartElement.Attribute("charttype").Value
            };
        }
        return new ChartCategory();
    }
}

GREAT! You even had the foresight to load categories from an XML document. This is going to work well, right? Let’s say you wire this all together and make a great front-end website for them (I’m going to skip this part since it’s not all that relevant right now). You have your charts displaying on the page, and business likes how it looks. You get a pat on the back for getting this done in a few days.

Now they want to add new features. They want charts to have a description. Come to find out, charts are only ever categorized into two main categories and two static chart types, and they always want one chart type on the left of the page and the other on the right. You make these changes. Later down the line, someone comes back and says there could actually be another chart type, one that is a grouping of charts that, as a group, have to be shown together.

The point I’m trying to make is that this code is brittle and untestable. One guarantee in software development is that requirements will change over time, and the later in the process change arrives, the riskier it is.

Why is this code untestable?

Well, technically it’s not impossible to test this code, but let’s try writing a simple unit test. One that proves a chart gets its category from the XML document:

[TestMethod]
public void charts_get_their_categories_from_xml_documents()
{
    // Arrange
    var reader = new ChartReader();

    // Act
    var charts = reader.LoadChartsFromDisk(@"c:\unittests").ToList();

    // Assert
    Assert.AreEqual("main", charts.First().Category.Main);
    Assert.AreEqual("sub", charts.First().Category.Sub);
    Assert.AreEqual("type", charts.First().Category.Type);
}

Halfway through this test, it becomes apparent that the ChartReader object has two major dependencies: the file system and the XML document. That is to say, in order to pass this test, the C:\unittests folder will have to contain image files and an XML document that match the assertions. This causes a few problems:

  1. Disk IO is slow
  2. This test is very brittle, as the arrange section is not really arranging anything.
    1. The test parameters are set up on disk, where anyone can modify them
  3. In order to deploy this to any type of build server that runs unit tests, the same filesystem dependencies need to be met.

While this class design got the job done, I hope you can see that it is not ideal. So how can TDD help? Find out in Part 2!

Unit testing with mocks

Recently I have really gotten into Test Driven Development (TDD). My first real try at TDD was for a library, which I have yet to complete, for integrating Spriter animations into FlatRedBall. Now I am working on something work-related, so I can’t give any specific details as it is proprietary information. I can, however, come up with some contrived examples that may be beneficial to the community regarding mocking.

There are several different mocking frameworks out there, and I started using Moq first, but I have changed to using Telerik JustMock because it has a less verbose syntax. I may post a “differences between Moq and JustMock” some day, but others have already covered that on the web, so it really isn’t necessary.
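
To give a flavor of the difference, here is the same arrangement in both frameworks, using the IWorker interface defined below (both snippets are sketches from memory):

// Moq: the Mock<T> object wraps the proxy; you pass mock.Object around
var mock = new Mock<IWorker>();
mock.Setup(w => w.DoWork()).Returns("test");
IWorker worker = mock.Object;

// JustMock: Mock.Create returns the usable proxy directly
var worker2 = Mock.Create<IWorker>();
Mock.Arrange(() => worker2.DoWork()).Returns("test");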

I also find great value in dependency injection, so IoC containers are invaluable in my production code; however, something I have also recently learned is that they can be just as useful in a unit test. Implementing the dependency injection pattern enables techniques such as inversion of control (IoC), which allows concrete dependencies to be specified at runtime for abstract dependencies. That is to say, if class A relies on interface I, and B implements I, then class C can associate A with B at runtime without either class knowing of the other. Example:

public class Foo
{
    private IWorker _iworker;
    public Foo(IWorker iworker)
    {
        _iworker = iworker;
    }

    public string DoWork()
    {
        return _iworker.DoWork();
    }
}

public interface IWorker
{
    string DoWork();
}

public class Bar : IWorker
{
    public string DoWork()
    {
        return "Bar";
    }
}

We have two classes: Foo, which depends on an abstraction (the IWorker interface), and Bar, which implements that abstraction. You can see that Foo doesn’t have any reference to class Bar in this example, and the reverse is true: Bar doesn’t know anything about Foo.

One might use this class in the following manner:

// Manually injecting the dependency through a constructor
var foo = new Foo(new Bar());

// ...or automatically injecting the dependency by setting up an IoC container
var container = new UnityContainer();
container.RegisterType<IWorker, Bar>();

// Typically the below step is done by a dependency resolver which depends on a
// container of some sort, but for a concise example:
Foo foo = container.Resolve<Foo>();

I’ll let the comments explain the code. Suffice it to say that the second pattern is much nicer, because a third class is responsible for coupling all of your dependencies. In MVC, this is typically a BootStrapper static class, but it can be anything that sets the MVC configuration’s global DependencyResolver object.
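
A minimal sketch of such a bootstrapper (assuming the Unity.Mvc adapter package, which supplies a UnityDependencyResolver):

public static class Bootstrapper
{
    public static void Initialize()
    {
        // Wire up all abstract-to-concrete mappings in one place
        var container = new UnityContainer();
        container.RegisterType<IWorker, Bar>();

        // Hand the container to MVC so controller dependencies resolve automatically
        DependencyResolver.SetResolver(new UnityDependencyResolver(container));
    }
}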

Why should you use this pattern in your class design? Unit tests! Unit tests! Unit tests!

Example:

[TestMethod]
public void a_dowork_returns_dowork_from_IWorker()
{
    // Arrange
    var container = new MockingContainer<Foo>();
    container.Arrange<IWorker>(i => i.DoWork()).Returns("test");

    // Act
    Foo foo = container.Instance;
    var result = foo.DoWork();

    // Assert
    Assert.AreEqual("test", result);
}

Mind blown? There’s a lot going on here, but essentially:

Arrange:
The unit test takes advantage of the dependency injection pattern and injects a proxy implementation of the IWorker interface into the Foo object it maintains inside the MockingContainer. The call to Arrange is saying that this proxy object should return “test” for any calls to its DoWork function. So we have an object that implements IWorker (it has nothing to do with class Bar), and we inject that into the instance of Foo automatically with the container.

Act:
We then call DoWork on the instance of Foo that the container provides us (which has the dependency injected already)

Assert:
We assert that the result returned from foo.DoWork() matches what the proxy DoWork was returning.

This is a simple contrived example, but it illustrates how IoC containers work in tandem with mocking very well. I hope someone finds this useful!

FlatRedBall game engine

In speaking with Joel Martinez about my most recent game project, we got to talking about finishing a game, and he mentioned something about animation. A friend of mine had developed part of a game engine I had always planned to use, and it had a pretty intuitive and well-laid-out library. When I mentioned that to Joel, he pointed me toward FlatRedBall.com.

I was blown away.

This game engine / framework has it all.

At first I was feeling a bit like using this type of thing would mean that I was giving something up, like I had lost that “cool” factor or some form of virtual street cred: developing with FlatRedBall would be nothing more than using some tool where you drag and drop a bunch of things, make a game, and claim you’re an indie dev. However, I’ve been reading the wiki a little bit here and there today, and I have to say I am over that hump… Joel said it best:

… I felt exactly like that in 2010 … then I got over it and released 3 games last year 🙂

Ok… I’m sold. Now to start playing, or finding the time to play!

Update: I’m having a bit of trouble creating a project with “Glue” at the moment, so we’ll see how it pans out.