3…2…1… planned!

No matter what we do, whether we are Agile or go down the waterfall… whether we are senior or junior… whether the project is big or small… for nearly everything we do we have to define tasks and estimations to plan the days, weeks and months to come. And no matter what, this (especially the first) planning is in most cases (and from personal experience, "most" means 90%) pretty far from what is really required in the end. The remaining 10% split into 1) those who planned well but not 100% correctly, maybe using "proven" methodologies such as PERT or just estimating +30%, and 2) those whose planning perfectly fit the development (again, in my experience, normally 1%-3%).
So you could say: just "do it" like the 1%-3% did. That would normally be the way to go if their approach worked out. The thing is, from everything I have seen in project planning over the years: it just worked because of luck!

I think it is fair to say that I learned project planning pretty much from the practical side, always failing at what I had learned theoretically. No matter how much time I spent planning big projects, setting up tasks, goals, milestones, reviews, reworks, … it never got into that 1%-3% frame.
Even with a more agile-driven approach, small sprints, good daily tasks, weekly reviews and time-consuming remodelling of the plan: if I sum up what had to be reworked every single week, I was as far away as with the initial waterfall plan. All goals got achieved and "somehow" it worked out, but it is disappointing for the one who planned to see his estimations being more a guideline than a workplan.
Based on that experience I started thinking: What are the reasons for such divergence? What am I planning wrong? What do I have to change to fit the developers' needs? And that is when it struck me: The Developer!

…to be busy!

In all IT projects I have worked on, most of the time is consumed by the developers, the engineers, the architects of the (mostly) software projects. Of course Game Design, Art, etc. have to be taken into account, but they often run parallel to what goes wrong more often: the actual development or implementation! (No question, thinking lean, everybody should care about downtimes caused by unfinished output/input.)
As a developer myself who has to plan for others, estimate work and think about production, milestones etc., none of the "theoretical" methodologies really worked out for me but just consumed my time. And in most cases this time is very limited. Estimations have to be given instantly to evaluate feasibility; plans have to be set up initially to have a higher-level model to work and further estimate on. So time is of the essence not only in the plan itself but also in creating it. And if I have to rework it all the time (real life), I do not want to spend too much time in that phase (no time for building up charts with optimistic, pessimistic and realistic plans…).

…should be enough!

By coincidence, Jake Simpson gave a pretty good impression of this wonderful land where everything works out. It is known as Should Be Land. This is normally the land the estimations come from, too. From developers who should estimate their tasks, should give an idea of how long each could take, to make a plan that also has to tie in with other departments (lean everywhere). If such an estimation fails because "36 hours should be enough!", more often than not others that depend on you are delayed, too.
Inexperienced developers, juniors and fresh "hackers" from the backyard especially tend to underestimate the requirements in correlation with others: planning interfaces, building adapters to dock onto others and so on. Nevertheless, seniors aren't better in general. All people who "program" stuff normally just plan the programming time… and they do not want to plan too much time, as a developer is often assessed based on his Cph (Code per hour) output and not on his quality of code, re-usability, extensibility or tests. The result is in many cases an optimistic estimation with no or little time to even plan what you are going to develop.

…am no developer!

Another often misleading planning element is that (many) project managers, scrum masters, gantt-junkies, … do not have the best development background. Therefore, the estimations given are taken as fixed. Experienced managers add 30% on top and plan that in. This is unfortunate, as even the best estimation cannot simply be patched by adding time if essential requirements for good development are missing.

One of Two of Three

Instead of complicated methodologies or just adding 30%-50% of time to an initial estimation, I split the work up into the three tasks I want to see as output from a developer: the implementation (or coding, hacking, programming, refactoring, …), the planning and the tests!

  • The development is the actual implementation of the task. It may be the creation of a user-system, achievements, tool, crafting, … whatever comes to mind
  • The planning is the structuring of work, the evaluation of patterns, architecture and interfaces to follow during development and precedes it accordingly
  • The testing is no QA process but the personal testing of code, writing of (unit-)tests, maybe even playing the created and succeeds the development

Now, instead of adding a specific amount to a given estimation, I add tasks to the estimation. My input is the implementation estimation from a developer. Based on that I add two thirds of it as planning and one third of the planning as testing, resulting in the three tasks of implementation, planning and testing with a weight of 1/3 of 2/3 of 3/3. For example, if an estimation is 9 hours, I add a task for planning with 6 hours and a task for testing with 2 hours.
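The derivation above can be sketched in a few lines (Python just for illustration; the function name and the return shape are my own assumptions, not part of the original method):

```python
def split_estimate(implementation_hours):
    """Derive planning and testing tasks from a developer's
    implementation estimate, using the 1/3-of-2/3-of-3/3 weighting:
    planning is two thirds of the implementation estimate,
    testing is one third of the planning."""
    planning_hours = implementation_hours * 2 / 3
    testing_hours = planning_hours / 3
    return implementation_hours, planning_hours, testing_hours

# The example from the text: a 9-hour implementation estimate
# yields a 6-hour planning task and a 2-hour testing task.
impl, plan, test = split_estimate(9)
print(impl, plan, test)  # 9 6.0 2.0
```

The whole package (9 + 6 + 2 = 17 hours) is then planned as one.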

Yes, the result is a very generous estimation, but the important part for me is that it covers mandatory tasks that are often forgotten and is also able to compensate for possible misjudgement, unforeseen circumstances, … as the package is given as one. The creation of these tasks reminds the developer what he "should" do, and the derived estimations compensate for possible problems as well as fitting the real necessity of the other tasks (at least in my experience).
The tasks are important because normally you do not start hacking instantly. Evaluating existing code and interfaces and elaborating which architecture or pattern to use is often more practical, and a necessity in general, before starting to implement (think something through before programming it). Already knowing what the result should be helps the implementation. And the testing part may be the coder's worst nightmare, but it is again a requirement.

The most important point for me is: it's easy! I can derive it in my head, I have a most-likely accurate estimation (the future may prove me wrong ^^) and I won't forget the importance of planning and testing.
If you follow different approaches, the weighting can also be adapted, either by mixing tasks or by changing the base weight. For example, if you follow a Test-first approach you can either switch the planning and testing tasks, as the testing in TDD also partly compensates for planning, or you can change the base to 4 and plan 1/4 of 3/4 of 4/4, meaning for our example: implement for 8 hours, test-first for 6 hours and plan for 2 hours (bear with me, I selected easy-to-calculate estimations).
Which base to use depends on personal experience, the project and, most importantly, gut feeling. For me, a third for general estimations and a fifth (1/5 of 2/5 of 5/5) for more specific tasks has paid off. But in general, split up into my three main tasks, I instantly have an estimation ready that at least fits my real world.
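These variants can be folded into one parameterised sketch, with the two fractions passed in explicitly. Rounding up to whole hours is my own assumption (it is what makes the base-4 example land on 2 hours rather than 1.5), so treat this as one possible reading, not the definitive formula:

```python
import math

def split_estimate(implementation_hours, second_frac, third_frac):
    """Generalised split: the second task is a fraction of the
    implementation estimate, the third task a fraction of the second.
    Rounding up to whole hours is my own choice."""
    second = math.ceil(implementation_hours * second_frac)
    third = math.ceil(second * third_frac)
    return implementation_hours, second, third

# Default thirds: 9h implementation -> 6h planning, 2h testing
print(split_estimate(9, 2/3, 1/3))   # (9, 6, 2)
# Test-first, base 4: 8h implementation -> 6h test-first, 2h planning
print(split_estimate(8, 3/4, 1/4))   # (8, 6, 2)
```

Swapping in 2/5 and 1/5 gives the base-5 variant for more specific tasks.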

…should work!

Please keep in mind this has no theoretically proven background; it is my experience from years of experimenting with different approaches and using the methodologies given in the literature. Everything depends on your environment and personal likes and dislikes. It "should" work for other cases, too. I have used it in several personal standalone and living project estimations, and at least for now it has fit best.
In my environment, with the time given and the amount of work to do, this approach works. It is never really off track, it reminds people about planning and testing besides the actual hacking, and it helps me easily keep track of developments without spending too much time on overblown concepts that fit neither my personal habits nor the "real" developer.
Of course, there are also drawbacks, such as too little or too much planning. If you split up e.g. User Stories into tasks such as: build a divideByZero() function; create a class object; write an SQL statement for querying all users; … you will end up with unnecessary tasks because of their simplicity. In such cases, the User Story "should be" the one to estimate and divide onto the tasks, or you reduce the base and introduce a zero/x task.
Therefore, this may not be the 100% 1%-3% approach, but it fits me best and thereby leads me into that frame more often, as the important thing is the variance that fuels this approach… and that can make it work for you, too!

Written for #AltDevBlogADay

What happened to the middle class of gaming?

As time was short this week, just a small post, but one thing in the last days really intrigued me: just recently at GDC, Epic's own (in)famous Cliff Bleszinski stated: "The Middle class game is dead!" Now, "Cliffy B." is known for some polemic statements, but if you look at the current games in the Top 10, being released or being covered in magazines and online portals, he unfortunately has a good point. But as I do not agree with Mr. Bleszinski, I thought I had to do a little rant on it.

State of the “Art”

To quote what was said:

It needs to either be an event movie – day one, company field trip, [Battle: Los Angeles], we're there. Avatar – we're there. The Other Guys starring Will Ferrell and Marky Mark? Nah, I'll f****** rent that, I don't really care, right?
Or it has to be an indie film. Black Swan – I'll go and see that. I'll go to The Rialto or I'll go to the triple-A Imax movie. The middle one is just gone, and I think the same thing has happened to games.

What he does is make a very logical comparison to the movie industry and the people watching movies. And I think we have all had that kind of thinking when going to the cinema, at least once.
In general, the movie and gaming industries underwent a big change over the last years through the advent of fast internet connections, a wider offering of "different" games delivering the same, and of course piracy. The movie industry had many problems related to the "new media audience" and tried to force a new way of thinking onto an old structure… and in most cases failed (not counting things like the iTunes Movie Store, Netflix, … as these are 3rd parties)!

Gaming did so, too! By building restrictive copy protections, enforcing keys for online play etc., the industry of course tries to protect its property and investment, but it also has to deal with a not-so-new audience that is in fact its only audience: the media audience that knows the possibilities and knows how to spread or work around what it does not intend to deal with (again, some "kind of" 3rd parties succeeded, such as Steam, providing other games more successfully).
But from there, two new branches (re-)opened for gaming: 1) widen the audience with titles like Just Dance and the NDS or Wii, and 2) work with the "independent idea", doing something special, even odd (being "Sundance"). These also broadened the art of games and the art of developers. But if Triple-A and Indie developments really are the only things succeeding, the whole Wii line-up probably has to be canned.

If we just take a look at sales, Top10s and media coverage right now everything pretty much underlines what Bleszinski stated:

  • The Top10 is ruled by Killzones, Call of Duties, Fight Night Champions, Bulletstorms, …
  • Call of Duty: Black Ops sold over 20 million units (vgchartz.com)
  • Arkham City, Bioshock: Infinite, Guild Wars 2, Prey 2, … on gaming news sites

Not much variance in what is the main source of information and "opinion" making for 90% of all gamers. I may be a developer, but I am foremost a Gamer! And having grown up in a time when every game was Indie or middle class and could get the attention it wanted, this somehow makes me sad. So why is it that there is no middle class in gaming any more?

Today’s Ratings…

You could think that gaming and game reviews follow the rule of the Highlander: There can be only one! Games seem to need a 90+ rating to sell, and they have to be the "definite" thing. Every game has to duel the genre highlight, and nothing besides is accepted. Every MMO has to compete against World of Warcraft, every Action-RPG against Mass Effect.
Now, with this attitude in reviewing games, always coming back to "This was a good game, but Call of Duty has more players online!", how can middle class games even get the attention they deserve? How can a middle class FPS compete against billions of dollars in revenue? Only "Indie Games" seem to get that little bonus, so that e.g. a game like Magicka (great game) often just got a 7 out of 10 but is a success (a rating that would be a death sentence for other games). But wait: Magicka was known and sold well even before the reviews with great ratings!?

…and the perception?

Reviewers and PR often argue that they have to force the 90+ and advertise the hell out of it just to create a "deceiving perception" that it is the next great game and needs to be bought. Now, if you just read the comments below reviews and under in-game videos that may show a little lag, you could think those arguments are correct. And when sales like those of Call of Duty: Black Ops pop up, everything seems to confirm that "their way" is right.
Now, everyone wants to fire that "one bullet", wants to be the next Call of Duty or WoW (not even Blizzard can do another WoW!). But really, besides those, what sells is a Pokemon, a Mario, a Just Dance, Kinect and its Adventures, and the many other (not only Wii) titles that everyone watches but somehow nobody really sees. These are not necessarily games that are labelled triple-A or covered throughout the gaming sites all day long (up to the release I didn't even know about a new Pokemon, but I like Black&White ^^), but nevertheless they sell, are in Top 10 charts and are even fun to play. I will skip the fact that in some purely economically driven stock corporations these numbers do not seem to count.

Media Coverage

How games are covered in magazines and on online sites is very important nowadays. 100 Action-Adventures against 100 First-Person-Shooters against 100 whatever games compete for the gamer's money. It is not enough any more to release some footage near release. You have to be in the gamer's mind for months before the release. You need to be watched to be recognized, you need to be waited for to be a "seller". But did people really wait for the Top 10 game Just Dance 2?

That recognition long before release is produced with "unreal" information. Costly render trailers provide visual entertainment in place of the real game. Produced parallel to the games, sometimes by external studios, they try to establish a name in the gamer's mind. Games like Deus Ex: Human Revolution were noted for nearly a whole year just because of an impressive render trailer. Only now are some people starting to report from real playing sessions. If these were anywhere near mediocre, the game would be criticized prior to release because of the faked expectations. In this particular case, even two expectations: because of its predecessor and the really incredible trailer. When I think about last year's Video Game Awards and its trailers, I could start thinking that gaming is not important any more, as I haven't seen much gameplay… the important part in the end!
A different game with a different media coverage is Limbo. Videos, previews, screenshots etc. always showed gameplay, and people got intrigued by the game, the art and the style, not by a "produced emotion" but by what they felt themselves watching the real game. Now, I am always thinking: if they only show a render trailer, the game probably isn't that intriguing to watch… or is it?

…and the perception?

Basically you could say: WYSIWYG! We can only go out and buy what we know of, what we see. That is why PR and marketing, the coverage in the media and the resulting rating are so important for being recognized and getting a possible share of the buyer's money. That money is limited, and more and more players try to get a share of it. The music industry once complained that piracy was destroying revenues. No question, piracy is bad, but an important point is missing: the ever-growing entertainment industry battles for the same share! New technologies, more games, more movies, … broaden what has to be consumed and therefore fight for the right to be noticed.
I remember a time when I would pick up a gaming magazine at my local store (yes, a printed one!) and have an overview of every single game to be released in that particular month and nearly everything in development. The market was growing but manageable, and you could keep an overview of everything interesting. We still remember those "glory days" of gaming. Games such as Outcast or Might and Magic VII keep me at my machine still today (just bought them at GOG).
Nowadays, such games would hardly be noticed if not covered exclusively, or they would not survive comparisons against "that one" genre-defining game.

The hidden Middle Class

A problem, as always, is the definition of "middle class" and especially "indie". Middle class does not necessarily mean A- or AA-Games, and Indie does not mean: one guy sitting in his room developing the next extraordinary gaming evolution.
Of course, games such as Braid or World of Goo with their extremely small teams are top-notch productions and extremely great games. They are also often used as the definition of Indie games. But besides providing an interesting game design and gaming twist, both games are extremely polished, with sometimes incredible graphics and beautiful music: art and production values usually associated only with so-called Triple-A games.
But besides these two examples, what about others? For example, Twisted Pixel, developer of the fantastic 'Splosion Man, is no "Two and a half Men" team but a team of about ten doing a high-production-value, extraordinary game with good design and a special twist. The new developer Adhesive Games just showed off their premier title Hawken, which was so impressive that Kotaku labelled it the most beautiful indie game. But a giant-city mech action game with an impressive graphics style and city view, an indie game? Introversion came from Indie "heritage" and is middle class nowadays. This is also a good example of what indie development becomes after that "one hit": middle class! You cannot stay "indie" if you are noticed, if people follow what you are doing and especially start building up expectations.

All of these, and even more such as contract developers (e.g. Shin'en), are the middle class no one notices. This "hidden" middle class is what provides the foundation of our gaming, of our everyday entertainment. Just as the movies cannot consist only of Avatars, gaming cannot and will not consist of only triple-A high-budget productions. Many try to achieve exactly that: a triple-A 90+ international seller every year! But from a real economic viewpoint this is nonsense. You neither bet on just one horse nor build on one pillar. Sometimes productions have to balance each other out, because success cannot be planned, especially in entertainment. If you have bad luck and your extremely awesome military shooter comes out the week right after a catastrophe, your game is doomed. The movie industry has known this for years and has paid for it. These errors should not be repeated.
And from my own experience, another player, the browser games, are no longer developed by some "PHP script kids" but are productions with larger teams trying to raise the production value to the level of standalone games. Right now they are probably the widest middle class: games everyone plays, but nobody knows.

No question, I love the so-called AAA-Titles such as Uncharted, Gears of War or Killzone. But to me the middle class definitely is not dead. It again depends on how we look at it, how we rate it (without prejudice) and how we classify "The Other Guys" of gaming, such as downloadable games, "indies" and online games (as well as my infamous Casual Games). Because we have to remember: even a company such as Epic started off as a middle class developer with Indie developers. "Cliffy B." may be right for the games he intends to create, but the middle class game is not dead in general! Or would you label EVE Online as middle class? Based just on money or CUO you would have to, if you think like Cliff!

PS: A polemic assumption by me: if the average gamer has $100 each month, he would buy more games if they cost less, and therefore a wider production base would be more effective and less risky!

Written for #AltDevBlogADay