Category Archives: Code

CDI – DIY – Manual Injection

Enterprise development can be good and bad: it may support you with great managed environments, awesome conventions and configuration possibilities. But sometimes it also forces you onto overly tight rails, convinced that it is for your own good, and leaves you asking: “Wait… why… WTF?”

Unmanaged Dependency Injection

One trend in particular jumps between brilliance and madness: Dependency Injection! To have a common, broad context management supporting your needs with simple references, view and scope management is very intriguing. But beware if you peek out the window just once… it will immediately drop all its support for you and your code.

A managed environment can be supportive in many ways and ease our everyday life, but it often misses the full scope of Software Engineering and its possible uses. This leads to limitations or a horrible inter-configuration-setup-library architecture that no one but God himself can manage.

Manual Dependency Injection

Therefore, it might happen that you have to jump between, say, two components of managed and unmanaged source, in desperate need of access to the managed environment. In that case you can manually look up the registered BeanManager and grab your managed resources yourself.

By checking the local context and grabbing the BeanManager you have the foundation to fiddle around and get hold of the managed entities.
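A minimal sketch of such a lookup, assuming a standard CDI container that registers the BeanManager under the usual JNDI name java:comp/BeanManager (the helper class name is only illustrative), could look like this:

    import javax.enterprise.inject.spi.BeanManager;
    import javax.naming.InitialContext;
    import javax.naming.NamingException;

    public final class CdiLookup {

        private CdiLookup() {
        }

        // Grab the BeanManager from the local JNDI context of the container.
        public static BeanManager lookupBeanManager() {
            try {
                return (BeanManager) new InitialContext().lookup("java:comp/BeanManager");
            } catch (NamingException e) {
                throw new IllegalStateException("No BeanManager registered in JNDI", e);
            }
        }
    }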

With the BeanManager you are now able to look up the managed beans by class or name and iterate over what you are looking for. Below you can find an example of how to look up the first managed bean of a given class, if any.
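Again only a sketch with illustrative names (CdiBeans, getFirstBean), assuming the BeanManager has already been obtained as shown above: it resolves all beans of the given class and returns a contextual reference to the first one, or null if none is registered.

    import java.util.Iterator;
    import java.util.Set;
    import javax.enterprise.context.spi.CreationalContext;
    import javax.enterprise.inject.spi.Bean;
    import javax.enterprise.inject.spi.BeanManager;

    public final class CdiBeans {

        private CdiBeans() {
        }

        // Look up the first managed bean of the given class, if any.
        @SuppressWarnings("unchecked")
        public static <T> T getFirstBean(BeanManager beanManager, Class<T> clazz) {
            Set<Bean<?>> beans = beanManager.getBeans(clazz);
            Iterator<Bean<?>> it = beans.iterator();
            if (!it.hasNext()) {
                return null; // no managed bean of that class available
            }
            Bean<T> bean = (Bean<T>) it.next();
            CreationalContext<T> context = beanManager.createCreationalContext(bean);
            return (T) beanManager.getReference(bean, clazz, context);
        }
    }

A call such as CdiBeans.getFirstBean(CdiLookup.lookupBeanManager(), MyService.class) would then hand you the first managed MyService instance, assuming such a bean exists (MyService is a made-up example class).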

Of course, I would always encourage you to stay as managed as possible if you develop in such an environment. But especially if you have to deal with legacy modules or dedicated multi-purpose components (which I had to), this solution, encapsulated in a utility helper, can save you many brain cells and long evenings better spent on real development ^^’.

Big thanks to Dominik Dorn for this life saving hint.

Under Pressure

The issue tracker is overflowing and the deadline is inexorably drawing near: the milestone 4 build has to be reached! Feature set B15 has to be fully implemented and QA-approved, but bugs still occur and some features haven’t even been worked on. Everything needs to be crunched in there somehow, as bug fixing is not limited by the announced feature freeze… and so it happens that you go into overtime!

Because of recent events over the last weeks and months this topic just keeps popping up for me: Crunch and Overtime! Nowadays, these are accepted as “normal” not only in Games but in IT and Development in general. I know of only a few other industries and departments that take crunch time for granted… especially at the end of a project.

As soon as overtime happens, it is already too late. Since no project manager plans (or should plan) for crunch time, something has gone wrong if it happens anyway. In some cases this is not necessarily bad: most people do what they do and work where they work because they like the challenge, they like the environment… they just like what they do.
But no matter how much you love your work, after 12, 18, 24 hours, day after day after day, no Red Bull and no single good night’s sleep can keep you really focussed and up to the task you are actually on.

I do not want to go into detail about why something like overtime happens, but there are a few professional and social things I have observed over the last years, and especially the last months, that I want to share in case you have to crunch to release a feature that others depend on!

Documentation vs. Communication (or “State the obvious”)

Pressure pushing down on me
Pressing down on you no man ask for

It could be so easy: you get your Game Design, your Technical Design, interfaces, standards, etc. defined and start developing from top to bottom. In the end everything works out and interconnects, and your task is finished. Great!
This perfect world is pretty rare and in most cases does not reflect “real life“. Usually many things have to be reworked or clarified, and therefore communication, social and professional, is one of the most important factors when it comes to development in larger teams.
Nevertheless, especially after 12 or 14 hours of work, or during the night, receptivity starts to lack the focus needed for intense communication and dialogue. People start staring at their displays trying to get around that one oddity, or gaze at the coffee or energy drink creeping across the floor. People that normally question everything start developing “till the end” instead of “to finish a task successfully”, meaning they “crunch” everything that is left into their current objective and finish it up as quickly as possible, top to bottom, based on the docs… and since clarification takes time, if the design can also be interpreted in one specific implementation kind of way: it will be!
So, at the end of a project, after many hours of work, during nights, etc., try to be proactive: if you crunch with others, state the obvious! If you do overtime yourself, start questioning the simplest things! This may sound annoying but it is most important, as even the most well-formed process is worth nothing after four-plus weeks of crunching. Normal things like “Did you add the graphics for that item?” or “Have you added the i18n key?” are the first things that get lost as soon as a narrowed mind is focussing on fixing a bug or finishing up a feature.

Crunch in Overtime (or “The right Task at the right Time”)

Insanity laughs under pressure we’re cracking
Can’t we give ourselves one more chance

Don’t get me wrong: sometimes overtime can be very healthy for a project and a team, for example if a small group of people focuses on one small feature set together and tries to reach a goal in a given time frame. Tasks get crunched, time just passes by and everybody is happy (with some pizza and beer, of course, this can be a wonderful achievement).
Nonetheless, very often overtime is used, or has to be used, to finish up tasks that are unfinished or even untouched. This leads to crunching in all the different tasks that just have to be done before a milestone or deadline is reached. So the overtime is used to clear out the issue tracker and not to finish what the main goal was.
If overtime happens, use it wisely and plan what to do! You are not in your right state of mind after hours and hours of coding, drawing, layouting, … and sleep deprivation can lead to effects similar to alcohol, e.g. headache or dizziness. Efficiency may seem increased after some energy drinks but, based on experience and code review… it is not! You cannot put a number on it, but if efficiency and focus are decreased, plan in some laborious work, some monotonous tasks, clean-up, working off method sets, etc. Completely new structures, concept art (depending on the crazy creativity ^^), calculations or templates, especially those interfacing with others (see above), are detrimental. Crunching has to be planned and should not just occur!

Social Competence (or “To Develop is Human”)

Watching some good friends
Screaming ‘Let me out’

During daytime everybody is calm, touched by the sun, always with a smile on their face. But after 15 hours from dawn till dusk, the smile starts to vanish as the sun sets.
It does not matter how “nice” somebody is during the day: during overtime and crunched tasks every mood starts to swing. Put under pressure for weeks, sleepless for days and crunching code into a machine, people get nervous and tetchy.
Now it is important to be sensitive. Not only developers, artists and everyone in between, but also a managing director has to apply his best soft skills and apply pressure in a focussed but appreciative way. Even ironic jokes that would cheer up anybody during daytime can unleash hell if people have spent 20 hours working on one bug! This emotional intelligence is a major issue when it comes to delegated work. Nobody intentionally tries not to finish a task, so do not sound as if they do.
Loosen up a little and see the crunch time as a task for the whole team. Do not take it too seriously… it is more important to sometimes just take a walk and have a little water-cooler talk. I am a non-smoker, but when it is getting dark it can be helpful to just go with the crowd and keep together. Share thoughts, introduce pair programming (if not already in place; 200% more effective during overtime in my opinion) and try to help each other, as together the longest nights can become the best stories for the next day.

Stay Focussed (or “Utopia is nowhere near”)

It’s the terror of knowing
What this world is about

In 90% of cases overtime and crunch time happen because a goal has to be reached in time: a milestone, a release build, … whatever. Unfortunately, during crunch time it often happens that some people see this time as “additional” hours to use (see above). They try to achieve 200% instead of reaching a bug-free 100%. Such ideas come from management and directors, but also from developers who tend to pressure themselves: if they do not get to see their bed for days, at least this time has to pay off.
Always be realistic about what the goal is and try not to lose focus of what can be achieved during this overtime. As stated above, crunching should be planned, so plan against the origin of that specific overtime. If people are under pressure it is all the more important to eliminate all mush and narrow down what you want to achieve. Overtime pays off in work, and even for the person themselves, if something has been achieved. A developer, for example, who works all through the night coding and coding without having achieved what he wanted by morning is only half the developer for the coming hours and days. But if you clearly achieve your realistic goal you are happy and produce endorphins. Your body is powered up and you can shed any sorrows of work. This is the best sleep you will have for months!

Keep the Balance (or “The equilibrium of Life and Work”)

And love dares you to care for
The people on the edge of the night

Overtime happens, and so does crunching some work. This can be manageable to some degree. But if your whole purpose in life is work and you are crunching every day, hour after hour, seeing sunlight only as a reflection on your display, you will “dry up”.
As much as overtime has to be planned (see above), the balance of overtime, crunching and regeneration has to be maintained, too. Otherwise the productivity and benefit of the additional time decrease down to a (negative) point of no return… yes, negative. At least in many cases I have seen, people actually fixed and created productive resources and code up to a specific point where the amount of positives fell below the amount of negatives, and that happened within a single day. As the days went by, the amount of time that produced good quality decreased and became inferior to the amount of time producing crap. And the most important issue is: those errors have to be cleaned up, too!
This is something general and may sound corny, but keeping a good Work/Life balance is most important, and overtime is no contradiction to it. But crunched overtime needs different compensation to regenerate from. As mentioned, a good night’s sleep might not be enough for an 80-hour week. Fresh air, sunlight, healthy drinks and food are a necessity to “survive” not only the crunch time but the time after it (the comedown).

Post Mortem (or “The Lessons we Learned”)

This is our last dance
This is ourselves
Under pressure

This is not necessarily something to keep in mind during a crunch phase, but afterwards. Always recap what happened! Always try to learn from those lessons! A retrospective or post mortem should help to pinpoint problems, miscommunication, bad planning, etc. for the coming tasks and has to be used for positive and directed criticism.
A review of every process, not only meta or technical processes but also social ones, can help to stifle future errors. Critique in particular is hard to deal with and often taken personally. But what directed criticism (a director who guides the review is most important: reviewing, not discussing, when it comes to focussed critique) should provide is what we require to grow, to evolve. Because that is what we all want: to become better! It may sound unfortunate, but people outside ourselves often have a better view of us than we ever can.
Therefore, always have a retrospective, a review, a post mortem, a lessons learned meeting, … call it whatever you (or your project management philosophy) like, but do it!

So, if we have a look at the lessons we learned:

  • State the obvious
  • Plan your overtime
  • Be social
  • Be realistic
  • Keep a Work/Life balance

and always recap your work!

All this may sound general and soooo obvious, but after weeks of overtime, pressure from management and the deadline coming near, it gets lost pretty easily.
Overtime happens, and sometimes it can even be fun to see “this one feature being finished” or “this one bug being fixed”, especially in a nice social environment. Nevertheless, if you have to crunch, keep in mind that not everybody is in the right state of mind, and always remember some general work rules… maybe even pin them on the wall in front of you!

Written for #AltDevBlogADay

(Don’t Fear) The (C)Reaper

I have to be honest: my C and C++ skills are bad! Besides some personal attempts in my “early years” and courses at university, I never had a good connection to the world of C. Nevertheless, I got my degree, became a developer, have been working for nearly a decade now in my young life and call myself a successful Software Engineer, even developing games… but in Java, JavaScript and C#. So my dream of getting into what I love most (Gaming) became real, and I do what I like all day!
But I still feel inferior to the “real” developers because of my bad C expertise and especially my personal ignorance in never really focusing on it.

All my history…

I got in touch with computers and gaming early in my life through my brother. I started with a C16, C64 and Amiga 500 alongside my Game Boy until I got my first (nowadays classic) PC. I was always intrigued by what was possible, the magic: playing Pong, Maniac Mansion and Zork and watching scene demos and cracktros from legends such as The Black Lotus or the Animators. I wanted to do the same stuff, I wanted to (text-)wander through my own forests, wanted to have colorful spinning balls on the screen… so I began learning how to do so.
I started off on the Amiga with assembler, later got into QBasic, Pascal, Delphi (loved its structure), Visual Basic (quick results) and very early PHP (the Internet) through early web development experiments and HTML/CSS. About ten years ago I got into Java at version 1.1 and am still on it. At every job I had before and during my studies I was able (or forced) to use Java, and it has stayed that way until today. Besides Java I have looked at and used Python, Scala (what I like about Java plus functional programming), ActionScript, … and even Perl (in just one project) out of personal interest or for personal stuff.
Given that history, my expertise developed early around object-oriented programming, which appealed to me the most, so I stayed. Very early in my “personal development” my expertise was already dominated by Delphi and Java, which shaped my view of OOP and general application development, even though their originators were influenced by C++ (for good or bad). But there were still the games I loved most. So I had to learn C and C++, had to teach myself the language of my favourite entertainment.

Try, Fail, Ignore

I bought books about C, about C++, about Game Development, about DirectX, about OpenGL, got into boards, searched the net for every tutorial I could find, tried everything and even got some minor things to work so that something moved on my screen… and it was programmed in C++. But something clicked in my head, spreading bad thoughts such as:

  • This could be easier!
  • Linking and the IDEs are clumsy, Eclipse is way superior!
  • These design patterns are native in Java!

I read more and more, tried more and more and unfortunately failed more and more often. The initial fun and ambition faded with every single compilation that turned out not to work as expected, crashed or ended in memory leaks.
Even with all the interest and devotion I had for learning, to me it was “just” another syntax complicating things. Pretty much everything I learned and did I was able to reproduce in Java in less time, with more comfort and with fewer errors. I got lazy!

Coding Personality

So, even though I tried to seriously learn and get into C and C++, it just did not reach me, did not touch me. From my history and my experience with other languages, IDEs and projects I knew that there were different ways to achieve nearly the same things. And it was not only my laziness, bred by very elegant development environments and library usage; the code itself also appeared cryptic to my eyes.
No matter whether I read Java, Python or PHP code nowadays: besides the fact that any code can be beautiful or ugly, I understand Java code instantly; I recognize the Python functionality; I get what the PHP developer meant to do! Even in the last years, as I was checking examples and help sites for iOS and Android NDK coding out of interest, I could not get rid of the thought: I can achieve the same thing with the Android SDK! (PS: Objective-C is pretty ugly ^^)
And it is not that I do not like any other languages any more: I was “forced” to use Haskell and dismissed it; tried Scala and loved it! Fooled around with Ruby and had fun; Prolog and Lisp… na; Eiffel and C#, olé!
C# especially appealed to me instantly: the syntax, the structures, the functionality and the ideas filled the holes that Java had left over the years. It may be a coincidence that Anders Hejlsberg, a main man behind “my” Delphi, is the lead designer of C#, but maybe we just think alike. And with the advent of XNA I even had a connection to game development again… and it started with a C! The commonality, of course, was a similar syntax, similar principles and the idea of a Virtual Machine executing and “managing” my code. No changes for specific operating systems (at least in the perfect sales world ^^), just develop and it would work… now with easy native Windows “ways”!
But the thing that always struck me again was games. Even XNA seemed “unreal” for real game development.

Games are developed in C

If I had gotten one cent for every time I read this exact line on a board, in a tutorial or in an e-mail… you know what I would be by now, as you are probably thinking the same thing right now. And I believed it! It was like this; it stays like this!
But over time I got more experienced in developing and engineering applications and solutions, and I realized that in most cases the programming language is just the tool to fulfil the requirements: and my requirement was still to make the things I have in my head!
I started to look around and found games such as Spiral Knights, Puzzle Pirates, Jake2 (a Quake 2 Java port) and Chrome using Java for scripting, and even EVE Online from CCP: a server and client nearly 100% developed in Stackless Python, a dynamic programming language in a multi-micro-threaded environment. Easy to read and learn, hard to master.
But probably the biggest counterexample today would be Minecraft. The biggest indie sensation of last year was developed in Java, and even though I never really got into the game, I admire Notch for what he did and achieved… and everything in Java. And Minecraft was not the first: Wurm Online had already shown where Notch could go… in Java.

With these great examples of games not developed in C/C++, I felt more confident in following my own path, which I have now successfully walked for years.

To be or TioBe

I do not intend to bash C or C++, but if I am not required to use C for the games I want to create, and other segments and industries can be conquered by languages such as Java, too (as shown in the TIOBE Index), why should I?
Especially in enterprise environments Java is a strong candidate for projects: from a manager’s perspective the Java salesman argues with operating system independence, an easily extended library architecture, basic native database frameworks and UI support… sold! Enterprise Java is still a keyword for international research projects today. And with Java ME and Android, even the mobile sector has been invaded by Java for years now.
And with Android supporting Java as well as Microsoft supporting C#, I can be everywhere: on PCs, on consoles, on mobile phones and in browsers. With languages I know, am experienced with and that appeal to me.
So, do I still have to put all my energy into re-learning what I already know in other languages? Where I have intensive practical knowledge? Where I can craft my dreams?

Ignorance is bliss

Even with my underwhelming C skills I get along very well. TIOBE proves me right, and so far I have always solved the problems given to me or achieved and created what I wanted. I work in the games industry, have worked on large international projects for big companies, have written some publications, and most results were accepted just fine. I even remember some projects and programs I created that I am still proud of, and that does not happen very often, as every developer I know normally wants to change the code they wrote the second they finish the last line ^^.
I am aware that for the last performance tweak, for the most awesome graphics engine, I would have to use C (or assembler), and I am aware that explicit knowledge is required to build the foundations I rely on, such as the Java or the .NET VM. Nevertheless, I do not claim that nobody should use C or C++. I just want to raise awareness among people who complain about others not knowing C and label them non-programmers. These guys are capable, too. And if they want to Write Games, Not Engines, they might even be better suited for game logic and not “just” tools. These guys are also able to know what really happens underneath, as that is the mandatory prerequisite, not the knowledge of a syntax.
Therefore, despite all my years trying to get into “the game” of learning C and C++, I turned out pretty well, with experience in large projects, systems and now games. I call myself a game developer. And if many dismiss my languages, I decide for myself that (C+)Ignorance Is Bliss…

Part of the Challenge: Show your ignorance! for #AltDevBlogADay

3…2…1… planned!

No matter what we do, whether we are agile or falling down the waterfall… whether we are senior or junior… whether it is big or small… for nearly everything we do we have to define tasks and estimations to plan the days, weeks and months to come. Again, no matter what, this (especially the first) planning is in most cases (and from personal experience “most” means 90%) pretty far from what is really required in the end. The other 10% split up into 1) those who planned well but not 100% correctly, maybe using “proven” methodologies such as PERT or just estimating +30%, and 2) those whose planning perfectly fit the development (again, in my experience, normally 1%-3%).
So, you could say: just “do it” like the 1%-3% did. That would normally be the way to go if their approach worked out. The thing is, from everything I have seen in project planning over the years: it just worked because of luck!

I think it is fair to say that I learned project planning pretty much from the practical side, always failing at what I had learned theoretically. No matter how much time I spent planning big projects, setting up tasks, goals, milestones, reviews, reworks, … it never got into that 1%-3% frame.
Even with a more agile-driven approach, small sprints, good daily tasks, weekly reviews and time-consuming remodelling of the plan: if I sum up what had to be reworked every single week, I was as far off as with the initial waterfall plan. All goals were achieved and “somehow” it worked out, but it is disappointing for the one who planned to see his estimations being more a guideline than a work plan.
Based on that experience I started thinking: What are the reasons for such divergence? What am I planning wrong? What do I have to change to fit the developers’ needs? And that is what struck me: the developer!

…to be busy!

In all the IT projects I have had to work on, most of the time is consumed by the developers, the engineers, the architects of the (mostly) software projects. Of course Game Design, Art, etc. have to be taken into account, but they often run more in parallel to what goes wrong more often: the actual development or implementation! (No question, thinking lean, everybody should care about downtimes caused by unfinished output/input.)
As a developer myself who has to plan for others, estimate work, think about production, milestones, etc., none of the “theoretical” methodologies really worked out for me; they just took my time. And in most cases this time is very limited. Estimations have to be given instantly to evaluate feasibility, and plans have to be set up initially to have a higher-level model to work with and estimate against. So time is of the essence, not only within the plan itself but also for the time to create it. And if I have to rework it all the time (real life), I do not want to spend too much time in that phase (no time for building charts with optimistic, pessimistic and realistic plans…).

…should be enough!

Coincidentally, Jake Simpson gave a pretty good impression of this wonderful land where everything works out. It is known as Should Be Land. This is normally the land where the estimations come from, too: from developers who should estimate their tasks, should give an idea of how long each one could take, to make a plan that also has to tie in with other departments (lean everywhere). If such an estimation fails because “36 hours should be enough!”, more often than not others who depend on you are delayed, too.
Inexperienced developers, juniors and fresh “hackers” from the backyard especially tend to underestimate the requirements in correlation with others: planning interfaces, building adapters to dock onto other components and so on. Nevertheless, seniors aren’t better in general. All people who “program” stuff normally just plan the programming time… and they do not want to plan too much time, as the developer is often assessed on his Cph (code per hour) output and not on his quality of code, re-usability, extensibility or tests. The result is in many cases an optimistic estimation with little or no time to even plan what you are going to develop.

…am no developer!

Another often misleading planning element is that (many) project managers, scrum masters, Gantt junkies, … do not have the best development background. Therefore, the estimations given are taken as fixed. Experienced managers add 30% on top and plan it in. This is unfortunate, as even the best estimation cannot simply be patched up by adding time if essential points that are prerequisites for good development are missing.

One of Two of Three

Instead of complicated methodologies or simply adding 30%-50% to a given initial estimation, I split it up into the three tasks I want to see as output from a developer: the implementation (or coding, hacking, programming, refactoring, …), the planning and the tests!

  • The development is the actual implementation of the task. It may be the creation of a user system, achievements, a tool, crafting, … whatever comes to mind
  • The planning is the structuring of the work, the evaluation of patterns, architecture and interfaces to follow during development, and it precedes the development accordingly
  • The testing is no QA process but the personal testing of the code, the writing of (unit) tests, maybe even playing what was created, and it follows the development

Now, instead of adding a specific amount to a given estimation, I add tasks to the estimation. My input is the implementation estimate from a developer. Based on that, I add two thirds of it as planning and one third of that as testing, resulting in the three tasks of implementation, planning and testing with a weight of 1/3 of 2/3 of 3/3. For example, if an estimate is 9 hours, I add a task for planning with 6 hours and a task for testing with 2 hours.
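In code this boils down to very little; a tiny sketch with made-up names, keeping the base as a parameter so it can be varied as described further down:

    // Sketch of the "1/3 of 2/3 of 3/3" split; class and method names are made up.
    public final class EstimationSplit {

        // Derive planning and testing tasks from a raw implementation estimate.
        public static double[] split(double implementationHours, int base) {
            double planningHours = implementationHours * (base - 1) / base; // e.g. 2/3 of 9 = 6
            double testingHours = planningHours / base;                     // e.g. 1/3 of 6 = 2
            return new double[] { implementationHours, planningHours, testingHours };
        }

        public static void main(String[] args) {
            double[] tasks = split(9.0, 3); // 9h implementation -> 6h planning -> 2h testing
            System.out.printf("implement %.1fh, plan %.1fh, test %.1fh%n",
                    tasks[0], tasks[1], tasks[2]);
        }
    }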

Yes, the result is a pretty generous estimation, but the important part for me is that it covers mandatory tasks that are often forgotten and is also able to compensate for possible misjudgement, unforeseen circumstances, … as the package is given as one. The creation of these tasks reminds the developer what he “should” do, and the derived estimations compensate for possible problems as well as fitting the real necessity of the other tasks (at least in my experience).
The tasks are important, as normally you do not start hacking instantly. Evaluating existing code and interfaces and elaborating which architecture or pattern to use is often more practical, and in general a necessity, before starting to implement (think something through before you start programming). Knowing in advance what the result should be helps the implementation. And the testing part may be the coder’s worst nightmare, but again it is a requirement.

The most important point for me is: it’s easy! I can derive it in my head, have a most likely accurate estimation (the future may prove me wrong ^^) and won’t forget the importance of planning and testing.
If you follow different approaches, the weighting can also be adapted, either by mixing tasks or by changing the base weight. For example, if you are following a test-first approach you can switch the planning and testing tasks, as the testing in TDD also partly compensates for planning. Or you can change the base to 4 and plan 1/4 of 3/4 of 4/4, meaning for our example: implement for 8 hours, test-first for 6 hours and plan for 2 hours (bear with me, as I selected easy-to-calculate estimations).
Which base to use depends on personal experience, the project and, most importantly, gut feeling. For me, a third for general estimations and a fifth (1/5 of 2/5 of 5/5) for more specific tasks has paid off. But in general, split up into my three main tasks, I instantly have an estimation ready that fits at least my real world.

…should work!

Please keep in mind that this has no theoretically proven background, only my experience over years of experimenting with different approaches and using the methodologies given in the literature. Everything depends on your environment and your personal likes and dislikes. It “should” work in other settings, too. I have used it for several personal standalone and living project estimations, and at least for now it has fit best.
In my environment, with the time given and the amount of work to do, this approach works. It is never really off track, it reminds people about planning and testing besides the actual hacking, and it helps me easily keep track of developments without spending too much time on overblown concepts that fit neither my personal habits nor the “real” developer.
Of course, there are also drawbacks, such as too little or too much planning. If you split up, e.g., User Stories into tasks such as: build divideByZero() function; create class object; write SQL statement for querying all users; … you will end up with unnecessary tasks because of their simplicity. In such cases the User Story “should be” the unit to estimate and divide across its tasks, or you reduce the base and introduce a zero/x task.
Therefore, this may not be the 100% 1%-3% approach, but it fits me best and thus leads me into that frame more often, as the important thing is the variance that fuels this approach… and that can make it work for you, too!

Written for #AltDevBlogADay

If you want to install Mac OS X in a VirtualBox on an Intel PC…

…maybe the following can help!

For about a week now I have owned an iPad, and it is actually pretty nice. Air Video Server lets me watch all my stuff on my couch or in bed, and some of the games in the Christmas sale are pretty nice, especially Dungeon Defenders and, of course, Angry Birds.

So, as I am a developer, I just want to develop and test some stuff on my iPad, to get into the iOS environment and broaden my horizons. Just out of curiosity. But the thing with nearly all Apple products in general is: they work fine… in their controlled AND closed environment!

If you want to develop for iOS you actually have to own an Intel Mac (e.g. Mac Mini, MacBook, etc.). At least that is the only official way. There are some other possibilities, such as the Dragonfire SDK, which provides a whole SDK for iOS development on Windows based on “normal” C/C++, or engines such as Unity and Shiva3D that deploy to iOS (Flash CS5 actually also provides an iOS publishing profile). But this is not what I want; I just want to try some things and get into normal iOS development with Xcode and Objective-C.

Since I decided not to follow any of those other ways, I needed to get hold of a Mac machine. But I am not willing to pay at least 600€ for a Mac Mini just to “play around” with iOS. So I had to find a way to install Mac OS X Snow Leopard on my machine. As this is not supported by Apple and a little tricky, I did not want to install it directly on my notebook but thought about trying it in a virtual machine first. “Good thought”, you might say… but tricky, as I noticed!

I chose VirtualBox as the virtual environment to test an installation. I did some googling and actually found many, MANY resources, tutorials and board messages explaining how to do it. I am pretty sure I read nearly all of them and experimented for about 3 days to get it running… day and night (got no sleep during that time as I wanted it to run). But every time something went wrong.
This one didn’t boot! That one wouldn’t get my network card right. Audio is still a big problem… and so on and so forth. But all of a sudden, as I was already about to give up (really!!!), I found two excellent websites and tutorials that got it working for me.

The one that got the effort started again was flyNflip. It was the first very clear and simple tutorial on how to get started, and the first that got me to a “just working” installation. The only thing you need is to get hold of a Snow Leopard DVD. I got an upgrade version; it’s not too expensive.
But it wouldn’t boot! What did I do wrong? Going through the comments I found the second website at Sysprobs. It shows nearly the same steps for installing Mac OS X in a VirtualBox but goes a little further. It explains the EFI problem. It gives examples and further tutorials for getting rid of issues, and more.

So now I have Mac OS X Snow Leopard running in my VirtualBox. It is also up to date (10.6.5) and actually reasonably smooth for a virtual machine. Currently I am installing Xcode and the iOS SDK (as you can see in the picture) and I hope everything will work out. If so, maybe I will post some stuff about iPad development…

[Image: Mac OS X running in a VirtualBox]

Nevertheless, I am still very curious why Apple tries to prevent you from exploring Mac OS X on other machines (actually, I am pretty sure I know why ^^’). But at least for developers who want to build iOS apps on a Windows or Linux machine, they should offer a way to do so. I would never buy a Mac machine just for that, but I am very curious to try it and maybe develop the next #1 app ^^. They nearly lost me during the process, and what I did is actually neither supported nor endorsed by Apple. I hope I won’t get into trouble; as I own an iPad and want to give something back, I hope not.

So, I will get back to my Mac OS X installation (the Xcode and iOS SDK installation is nearly finished) and hope I have provided two good resources for others who think like me.

*UPDATE* Xcode and the iOS SDK are installed. Works fine! But I have trouble setting a widescreen display resolution. 4:3 works fine (e.g. 1280×1024); widescreen has some problems (e.g. 1440×900). It still boots, but the redrawing of the window is somewhat messed up.