Archive for November, 2009

Maintaining THE WALL Between Development and Production Environments

Thursday, November 26th, 2009

Yesterday I was reminded, again, of one of my very old and very important rules:  any company involved with development must maintain a wall between their development and production environments.  A related important rule is that only releases that have been subjected to sufficient testing should be used to update production environments.


How did an ‘old development management sea dog’ like me get stung by not complying with these rules?  Well, truth be told, the reason lies with succumbing to two of the seven deadly sins!

Isn’t it funny how most of our human errors can be traced back to that simple list?


My first sin this week was GREED–specifically, feature greed.  Talking only about development related sins, of course!  I wanted to show the potential client ‘the latest and greatest’, ‘the best that we could be’, ‘the goodies’, you name it!  My rationale was that there had not been many changes to the application executables in the prior week, so after carefully testing the changed functions and passing them, I thought I was good to go with the new stuff.

In the haze of feature greed, I forgot about our old friends ripple effect and fault feedback ratio.


Ripple effect, for those not in the business, is when you change something ‘over here’ in your code and it breaks something ‘over there’ in your code.  It can be the fault of the developer who does not follow a change through to all affected areas, or of the project manager who does not see all the modules a change will affect and so fails to assign the developers who own potentially affected code to deal with the effects of the change.

In this particular case, I took a delivery from one developer that resolved some errors in one module, all of which tested good; then I took another delivery of fixes from another developer in a different module, all of which also tested good.  It was the combination of the deliveries that was the problem: fixed functionality in the one module caused code in the other module (changed weeks ago and tested OK at that time) to blow up, because it once again found data where it was expected, but in a now unexpected format.
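The scenario above can be sketched in a few lines of hypothetical code.  The module names and data formats here are invented for illustration; the point is that each delivery tests fine in isolation, and only the combination blows up.

```python
# Hypothetical illustration of the ripple effect described above: module A's
# "fix" changes its output format, and module B, written weeks ago against
# the old format, blows up once data starts flowing between them again.

def module_a_report(items):
    # After the fix, module A emits dicts instead of the old
    # "name:value" strings that module B still expects.
    return [{"name": n, "value": v} for n, v in items]

def module_b_consume(records):
    # Module B tested good against the old string format.
    total = 0
    for rec in records:
        name, value = rec.split(":")  # dicts have no .split -> AttributeError
        total += int(value)
    return total

records = module_a_report([("lights", 4), ("outlets", 9)])
try:
    module_b_consume(records)
except AttributeError as exc:
    print("integration blew up:", exc)
```

Each function passes its own point tests; only a combined system test that runs A's output through B would have caught the mismatch before the demo.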


The fault feedback ratio demonstrates the essence of humanity: none of us is perfect.  The ratio measures how many new defects a developer introduces for each defect they fix.  Nobody can go in and fix an issue without ever breaking anything else.  Developers who are very careful will have an extremely low fault feedback ratio, but even the best will sometimes break something else when they go in to make a fix.

This was the root of the issue that caused the error during the demo.  Code in module A was at one time synchronized with the output of module B, but a change made in A by developer Z left it unprepared for all possible variants of that output.  This remained undiscovered for weeks because, prior to Z making the change in A, developer X had broken module B so that it had no output whatsoever.  Only after the now broken module A was put together with the now fixed module B did the new error show up.
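The fault feedback ratio itself is just simple division, and it is worth tracking.  Here is a back-of-the-envelope sketch with made-up numbers:

```python
# Fault feedback ratio (FFR): new defects introduced per defect fixed.
# All figures below are invented for illustration.

def fault_feedback_ratio(faults_fixed, new_faults_introduced):
    if faults_fixed == 0:
        raise ValueError("no fixes to measure against")
    return new_faults_introduced / faults_fixed

# A careful developer: 40 fixes, 2 new breakages.
print(fault_feedback_ratio(40, 2))  # 0.05
# A rushed week: 10 fixes, 3 new breakages.
print(fault_feedback_ratio(10, 3))  # 0.3
```

Even the careful developer's non-zero ratio is the whole argument for system testing after every round of fixes: some fraction of fixes will break something else, and only testing the assembled whole finds out which ones did.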


The net cost of my sins is the endangering of a sale that should have closed a lot more easily.

If you are not completely willing to bear the worst case possible outcome of a blown demonstration or production roll out, don’t release to production before something is completely ready and tested!  Given the vast cost of ‘broken arrow’ releases when you multiply by the number of users that you will take down for a while, premature production releases are simply not worth it.


In my case, the decision was mine–no one was pressuring me for a premature release.

In many other cases, possibly yours, there is plenty of external pressure to release before you know you are really ready.  You must resist; do not join the dark side!  For those who persist, quantify the worst case scenario in terms of user downtime and lost sales (or whatever set of effects would result), and the overeager stakeholder should relent from pressuring you…at least a little bit.


The second sin of the week was SLOTH.  In today’s parlance, ‘laziness’.  I like to make it sound less bound-for-hellish by saying ‘lack of time’.  A proper amount of time allocated to the proper tests would have exposed the issue and ensured that I stayed with the last release for the demonstration instead of the new stuff.

If one cannot make the time to get the proper tests done, one should not use the new release.  My failing was having a busy week with other matters and simply not having the time available, yet still doing the point tests and thinking that was enough.  It wasn’t, of course.  Complete system tests need to be done before releasing any code from development to production environments.
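One way to make that rule hard to break is a release gate that refuses to promote a build unless every test level has passed.  This is a minimal sketch under invented names; real gates live in CI pipelines, but the logic is the same:

```python
# A minimal release-gate sketch: a build is only promoted to production when
# every required test level has passed. Point (unit) tests alone are not
# enough. The level names and result structure are invented for illustration.

def release_allowed(results):
    required = {"unit", "integration", "system"}
    passed = {level for level, ok in results.items() if ok}
    return required <= passed  # subset check: all required levels passed

# Point tests passed but the system tests were never run: no release.
print(release_allowed({"unit": True, "integration": True}))  # False
# All levels green: safe to promote across THE WALL.
print(release_allowed({"unit": True, "integration": True, "system": True}))  # True
```

The design point is that the gate is mechanical: on a busy week, a human (like me) rationalizes skipping the system tests; a script does not.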

Licking my wounds,

Dean Whitford, B.Comm.

Chief Operating Officer


DraftLogic Electrical and Global Warming?

Thursday, November 19th, 2009

OK, so what does a building electrical systems design expert system have to do with global warming?  Well, it turns out that building electrical systems design, whether done with DraftLogic Electrical or in some other manner, has a LOT to do with global warming.  This is because efficient, right-sized, and accurate design reduces global warming.  Conversely: inefficient, over-sized, and inaccurate design contributes to global warming.


Since some of you may not believe in the science behind the global warming threat, how about we talk about things that every building owner desires: safe design that minimizes construction and operating costs and yet meets any future expansion needs they identify.  The choices the building electrical systems designer makes to meet these needs are the same ones that minimize the building’s contribution to global warming.


Efficient, right-sized, and accurate design will result in the minimum number of devices required to provide the needed level of service.  With the right number of devices, connected in the most efficient manner, the materials required to service the building will be minimized.  This reduces the cost of construction and, at the same time, reduces the amounts of these goods that have to be manufactured from raw materials and then transported to the building site, thus contributing less to global warming during the construction phase.


Once built, the right-sized design will consume less electricity than an overbuilt design.  This reduces operating costs and, of course, contributes less to global warming assuming at least some of the energy is coming from non-renewable resources.

Whether you believe in global warming or not, I think we can all agree that efficient, right-sized, and accurate design is better.


How do we make efficient, right-sized, and accurate design happen?  Well, that means bringing back some of the detailed and repetitive calculations/analysis into building electrical systems design:  performing lighting calculations for each space (e.g. zonal cavity), supplying the needed lighting/receptacles for each type of space and no more, circuiting efficiently to maximize neutral sharing, drawing branch circuit wiring to also maximize neutral sharing while minimizing the amount of wire needed to interconnect everything, and finally sizing the feeders to be safe but not overly large.  Some designers at some companies have been forced to reduce the accuracy of their selections in these regards due to design time limitations and design fee budget limitations.
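To give a feel for the kind of repetitive arithmetic involved, here is a rough sketch of the lumen-method calculation that underlies a zonal cavity analysis.  All of the numbers are invented for illustration; in real work the coefficient of utilization comes from the luminaire’s photometric tables and the room cavity ratio.

```python
# Lumen-method sketch: luminaires required =
#   (target illuminance * area) /
#   (lumens per luminaire * coefficient of utilization * light loss factor)
# Figures below are made up for illustration only.

import math

def luminaires_required(target_lux, area_m2, lumens_per_luminaire,
                        coefficient_of_utilization, light_loss_factor):
    needed = (target_lux * area_m2) / (
        lumens_per_luminaire * coefficient_of_utilization * light_loss_factor)
    # Always round up: under-lighting the space is not an option.
    return math.ceil(needed)

# 500 lux over a 60 m^2 office, 3500 lm fixtures, CU 0.6, LLF 0.8:
print(luminaires_required(500, 60, 3500, 0.6, 0.8))  # 18
```

Multiply that by every space in a large building, then add circuiting and feeder sizing, and it is obvious why time-pressed designers fall back on rules of thumb.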

Performing the detailed calculations and analysis required for the above would add A LOT of time to how long it takes you to design a project, unless your CAD suddenly got A LOT smarter and did the extra work for you!


Enter DraftLogic Electrical!  DraftLogic Electrical inherently does all those detailed calculations that you don’t have time to do and may have been forced to ‘rule of thumb’ and ‘educated guess’ around.  With DraftLogic Electrical, you take a lot less time to do a lot more accurate design work.  That’s good for you, good for your client, and good for the environment.


Since I cannot resist the urge to preach, and this is MY blog, I am going to do it.

Above I detail how the decisions made by building electrical systems designers affect global warming.  Setting aside that major and contentious issue, we can also frame it as ‘the sustainability of the environment that our species lives within’.  Basically ANY job that ANY one of us is doing can be shown to have similar effects–do you want to be part of the problem or part of the solution?

Kindest Regards,
Dean Whitford
Chief Operating Officer

Software Engineering and Hockey–Not As Different as You May Think!

Thursday, November 12th, 2009


Hockey is one of my favorite sports.  I love to play it, and sometimes I like to watch it.  Even when you are playing, though, you watch while you are on the bench waiting for your shift.  It is when watching that one gains the most appreciation for the strategy and finer points of the game.  This is our first similarity to software engineering: the unbiased perspective, where you can really ‘see’ what is going on, is only gained from being ‘off the ice’ or ‘not a part of the project’.  In either case, if you are in the action your perspective is tainted and you will make errors that you definitely would not if you were ‘off the ice’.  You’ll be a better hockey player, or member of an application development team, if you can lift your mind out of ‘the game’ to look at what is going on with some degree of detachment from time to time.

In the same light, I used to tell a sales manager who worked for me to ‘lift your head out of the swamp for a while each day’.  I said this because he was getting so mired in the day-to-day goings-on that he was losing sight of what we needed to achieve in the coming weeks, months, and years.


Managing software engineering is my work.  The ‘game’ is application development, played out in offices and on computers instead of on a few centimeters of frozen water.  It takes a team to build an application; the larger and more complex the application, the more varied and specialized the roles on the team.  The need for a team composed of people fulfilling different roles is our next similarity.  In hockey, there are two wingers, a center, two defensemen, and a goalie on the ice at one time.  Each role requires a different mindset and skills to do it well.  In software engineering, we have project managers, software architects, developers, quality assurance staff, and documentation staff.


Each hockey team has one and only one captain, just as there must be one and only one head project manager for each application development project.  Someone has to be the boss, able to make the decisions that must be made in a timely manner.  Oh yes, there are assistant captains and vice-project managers, but the reality is that there needs to be a bit of benevolent dictatorship going on for a team of either kind to be effective.  Some of the toughest decisions involve who is going to do what.  The average winger can’t play defense very well, and let’s not even talk about putting anyone but a born and bred goalie between the pipes!  In the same light, most quality assurance staff would not make good developers.  So, in both hockey and in application development, the ‘captain’ has to put each team member in the role that their skills and temperament make them most suitable to fulfill.  And those that don’t fit any of the roles needed on the team should be playing on a different team 🙂


Finally, let’s talk about execution.  In hockey as in application development, ‘the devil is in the details’.  All the little things have to come together to make the big thing a success.  Passes that are crisp and on the tape, everyone giving their all from start to finish of the game, and playing your position properly.  These are the elements of winning on the ice.  Play the details right and the game will go your way.  The same goes for application development–good system architecture, robust and efficient code, effective quality assurance, and good usability make for a successful application development project.

OK, so MAYBE this whole blog entry is really kind of a stretch…but at least I got to write about what I do and what I like to do!

Yours in software engineering,
Dean Whitford, B.Comm.
Chief Operating Officer

That Last Percent of the Work

Wednesday, November 4th, 2009

I have been a software engineering project manager for eleven years now, and it still amazes me how painful it is and how long it takes to finish up that ‘last percent’ of the work!

“That doesn’t make any sense” you say?  Or perhaps you think the last percent of work should not be any different from any other part of the work?

Following that logic, our 20,000ish hour development effort should take a couple hundred hours for the last one percent to be finished.  The reality is we have burned five hundred or so hours and counting.  The light at the end of the tunnel is in sight but we are crawling toward it instead of running!

There are a number of reasons for that last percent of work taking such a disproportionate amount of effort to finish off.


A big part of it is developers succumbing to human nature, and the project manager letting it happen, because the decisions seem harmless when you consider them one by one.  What I am referring to is most people’s tendency to go for the ‘easy kills’ when assigned a number of tasks.  We do that because we like the feeling of success and forward progress we get when we finish a task.  In organizations where performance evaluation is based on completing tasks and having quality assurance sign off on them, going for the easy kills is sometimes an act of survival, i.e. the system makes you do it, whether you want to or not.

The project manager’s part of this is either not assigning a priority to the tasks to ensure that things get done in a particular order regardless of their relative difficulty OR in letting it slide when some of the developers skip the hard stuff.  My sin is usually the second type.


Another material contributor to the hours spent finishing the last percent of work is dealing with more ripple effects than usual.  Once your application is basically completely built, any change you make has a higher probability of affecting more code–much more in both respects than when you are first starting to build the application.


In the same light, requirements changes that are introduced at the end of the job have a higher probability of affecting more code.


So now we know how we got to this painful stage, how do we get through it?

The first thing is to manage the expectations of all the stakeholders.  Let them know that the task list is much shortened from before but that the items left are difficult and risky.  Let any of those who are asking for requirements changes know the full potential impact of their requested change & suggest, where feasible, that the change would be best left until after the first version of the product is tested and released.

The second thing is to maintain the development team’s morale.  Grinding through the tough tasks and dealing with ripple effects is going to be stressful, especially for those who either need or are used to frequent gratification from completing easier tasks.  Congratulate each developer for the small gains that will be made as they happen.  Keep the team updated about progress that is being made.

Finally, stay positive and stay focused!  Your clients on the one side and development team on the other will sense any fear or negativity you feel–and it will adversely affect their attitude.  Celebrate progress made, however glacial.  You need to keep your focus, the clients’ focus, and the development team’s focus all squarely on completing the product.  You will need to be strong in resisting any changes that are not absolutely necessary, whether proposed by the client or by your development team.  In this light, you need to be prepared to re-evaluate the inclusion of any functions that are causing major problems.  If feasible, delay a troublesome function for later in order to expedite the completion of the release.


OK, so you and your team pushed through the resistance and finished the release.  Can you avoid making the ‘last percent’ so painful on the next project?  Well, you won’t be able to get rid of all the nasty nuances tied to the last percent of work, but much of it can be managed away.

Ensure that your developer performance evaluation and task tracking system are sensitive to task difficulty and priority.  Do this by assigning higher completion credit to difficult issues & assigning an exact relative priority to each task.  The completion priorities must be enforced throughout the project, otherwise the tough items will pile up for late in the project.
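The scheme above can be sketched concretely.  The task structure, weights, and names here are all invented for illustration; the point is that work order is driven by an explicit priority rather than by whichever task looks easiest, and completion credit scales with difficulty:

```python
# Sketch of difficulty-and-priority-aware task tracking (invented example).

tasks = [
    {"name": "easy UI tweak",     "priority": 3, "difficulty": 1},
    {"name": "ripple-prone fix",  "priority": 1, "difficulty": 5},
    {"name": "report formatting", "priority": 2, "difficulty": 2},
]

# Enforce the completion order by priority, not by apparent ease.
work_order = sorted(tasks, key=lambda t: t["priority"])
print([t["name"] for t in work_order])

# Completion credit proportional to difficulty discourages cherry-picking
# the easy kills: the hard fix is worth five times the UI tweak.
credit = {t["name"]: t["difficulty"] * 10 for t in tasks}
print(credit["ripple-prone fix"])  # 50
```

With credit weighted this way, a developer who knocks off the one hard task earns more than one who clears several trivial ones, so the evaluation system stops rewarding exactly the behavior that piles the tough items up at the end of the project.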

Throughout the project, carefully evaluate any change requests, whether from clients or the development team, for the degree of ripple effect and for how truly immediate they need to be.  Change requests that can reasonably be delayed to a future release should all be delayed.  Ones that cannot be delayed will need their required time, including dealing with ripple effect and added quality assurance time, to push the project completion date forward commensurately.

Until next time for more software engineering ramblings,
Dean Whitford, B.Comm.

Chief Operating Officer