Archives for category: Testing

Tear Down the Wall is a look at a potential future of the testing industry, and one that I quite like. It’s fun to imagine working on a team where the developers have testing knowledge and use it to make their code faster and better, where the testers have coding knowledge and use it to help fix things and make the software better, faster.

That is so far from my reality that I actually find it a little threatening.

Alan sets up a ‘makers and breakers’ environment, and then shows how to do it so much better. I’m in an environment where ‘makers and breakers’ is something we’re still fighting to even have. I’ll leave off the juicy details – doubtless, O testers, you have fought these same fights yourselves, and they are frankly stale. In my environment, I think that someone saying ‘get rid of test roles’ would likely mean ‘get rid of testing activity,’ which, as a great majority of the software-developing world has shown, isn’t a great plan.

That isn’t what Alan means, or what he talks about. I’d love to see a system that functions like this, but I rather doubt I’ll get to see one soon.

Inspired by XKCD and created in The Up-Goer Five Text Editor.

In my job, I make sure the people who tell computers to do things have told them in a way that I can’t break. I work with both people and computers, but more computers than people. I get the computers before they go into the world, and try to break them so that they break where it is safe to do it, not where it would cause a problem.

My job has three parts. First, I make sure the computer does what it was told to do, and does it in the right way. I have to pay very close attention to make sure the computer does it right, or it can make a problem where it looks like it does the right thing, but actually it does it in the wrong way. If there is a problem, I tell the person who told the computer what to do, and they tell the computer to try something else.

The next part is to make sure I can’t break the computer by doing things the wrong way. The computer has to be told how to handle it if I do something wrong, and if I do something wrong in a way the person who told the computer what to do didn’t think about, I may break the computer. I am always happy to find something to break, but the person who told the computer what to do might not be very happy. They will have to tell the computer to do something different.

That part is also hard because people out in the world will find new and interesting ways to be wrong, and I can’t think of all of them, so there will always be problems of that type found when the new thing goes out into the world.

The third part is the one I like best. I have already made sure the thing does what it’s supposed to and doesn’t do what it is not supposed to do. For the last part, I act mean and try to break the computer. Some of the problems I find are small, like if I can do something that makes it stop doing what it was told. Other times I can make a big problem, like if I can do something I should not be allowed to do, or tell the computer to do something it should not do at all. Sometimes, those are very big problems.

I like finding those problems best, but it is best for the computer if I do not find any problems at all. Sometimes the computer stops working at all when I find a big problem, and then I have to fix it so I can keep finding problems.

Using only the ten hundred most used words for this has been hard, because in my job, I have to use many large words for hard ideas, so explaining them without using the large words makes me work in a different way.

I’m part of a group of mentors in my workplace. We take new testers, teach them all kinds of neat things, and then unleash them on their projects.

Before we started doing that, we were teaching them the specifics of the project they’d be assigned to and cutting them loose. That turned out to be a bad idea – the new testers would learn quickly, excel at one thing, and then fall flat when the project ended or when they were transferred to a different product. That wasn’t at all what we wanted.

We started training on general principles and the business model, with a lighter touch on the specifics of a given project. That’s helping quite a bit, but it’s only expanded the area in which people are doing well – it hasn’t solved the problem of new testers being really good at one thing and being unable to carry their knowledge of that one thing over to other things.

On further research, the missing skill – the one that lets me say “this web application does a lot of the same things as that web application, so I bet they’ll have similar problems and vulnerabilities” – is abstraction: the ability to lift a concept out of its details, stretch and prod it a bit, and apply it somewhere else to see how it fits.
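
If it helps to see the skill written down, here is what abstraction might look like in code: the checks are aimed at a concept (a login form) rather than at any particular product. Everything in this sketch – the names, the two pretend products, their rules – is invented for illustration, not taken from any real project.

    from abc import ABC, abstractmethod

    class LoginPage(ABC):
        """The abstract concept: a login form, independent of any one product."""

        @abstractmethod
        def submit(self, username: str, password: str) -> bool:
            """Attempt a login; return True if it succeeds."""

    # Checks written once, against the concept rather than a product.
    def check_rejects_empty_credentials(page: LoginPage) -> None:
        assert not page.submit("", ""), "empty credentials were accepted"

    def check_rejects_sql_metacharacters(page: LoginPage) -> None:
        assert not page.submit("' OR '1'='1", "x"), "SQL-ish input was accepted"

    # Two pretend products that share the concept (purely hypothetical).
    class AcmeLogin(LoginPage):
        def submit(self, username: str, password: str) -> bool:
            return username == "admin" and password == "hunter2"

    class WidgetCoLogin(LoginPage):
        def submit(self, username: str, password: str) -> bool:
            return bool(username) and len(password) >= 8

    # "This app does a lot of the same things as that app" becomes:
    # run the same suspicions against both.
    for page in (AcmeLogin(), WidgetCoLogin()):
        check_rejects_empty_credentials(page)
        check_rejects_sql_metacharacters(page)
    print("shared checks held for both pretend products")

The testers who struggle are the ones who can write the Acme-specific version but never pull the LoginPage layer out of it.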

I’m exploring how to teach this skill. Does anyone have any ideas?

The Mallet is an amazing artifact that, wielded in the hands of a software tester, beats all defects and design flaws out of a program and leaves it shining, newly forged and ready for release. By virtue of its inbuilt magic, it removes user experience problems and makes the programmer’s beautiful intent accessible to users of all levels of skill. Its power is so strong that it can actually correct for environmental problems and hardware inconsistencies before they even happen. A single strike of the Mallet can remove a defect at the code level. Repeated use can turn a program into a work of art, released to its adoring audience to the sounds of cheers and showers of money and fame upon its creators.

Instead, we’re stuck with the messy and occasionally dangerous process of manual testing, automated testing, and using real human beings to get imperfect results. Whyever would I want to do that, you ask? After all, it could actually break the software! People miss things, and human judgment is fallible. The Mallet seems like a much better way to go. Unfortunately, all attempts to create such a thing have failed, and I do not possess the Magical Mallet of Quality.

Rather than relying upon the abstraction of a tester somehow creating quality and infusing it into a program through sheer force of will, it may be more effective to design competently, execute with style and grace, and release with the knowledge that your shining, beautiful product is probably going to make somebody, somewhere, a little less or more happy than they were before they used it.

I have finally pinned down one of the biggest problems I have at work.

I have two different modes I use at work. One is the very literal, very focused mindset I use for testing; I have unimaginatively nicknamed it tester mode. I do my best work when I approach the software without expectations, take in every detail I can, and stay focused on what is there, right now. In this mode, it is very easy to take the raw sensory data of interaction with the software and compare it to my pre-written test cases; if anything is wrong, I’m much more likely to catch it, and faster, than I would otherwise. Rather than interacting with my vision of what the software is, I can interact with the software as it is, and explore the parameters of what it can do without worrying about what it should do. Interestingly enough, I tend to slip out of words and think in scraps and phrases of music (tones have meanings; phrases have complex meanings; and repeating a phrase while changing it helps me mull over a concept I barely have words to explain).

The downside is that metaphor is likely to be lost, as are most abstractions; I have a much harder time communicating with other people when operating like this, resulting in very real frustration on both sides of any given conversation. That includes forgetting what the subject was two sentences ago and instead navigating the grammatical depths of the current sentence for meaning, or tripping over a subject that should have been implicit (and was, in fact, from the speaker’s perspective, but wasn’t from mine – one I’d have gotten while in a different mode). In addition, I become easy to distract, and will go diving down rabbit holes that weren’t in my test cases, simply because I found a way off the map and wish to explore it. (Naturally, that’s where the best bugs are.)

The second mode is a much more high-level view of things – this is where I interact with the ideals of things, and can think easily about what the software should be, or what we want it to be, rather than what it is right now. This is the mode I use when I write, where metaphor and abstraction are merely tools and not baffling sidesteps. This is where I write my test cases (what I want the software to be) so that I can run them in testing mode (what the software is), and where I do my less project-oriented writing, be it professional or for the blog. This is my highly verbal, analytical communicator mode. I’ll think in words, though the music is still in the background, and I’m much more difficult to distract from whatever I am doing when in communicator mode.

The problem occurs when I must switch from one to the other. It’s not a mood or a shift of intention; it’s a difference in the fundamental way I think about things. The first state of mind is so focused that it is difficult to snap right out of it; instead, I take a few minutes and move from one state to the next, to avoid failing to switch or lapsing back into the state I just left. For instance, if I get an unexpected phone call, I’ll probably have difficulty communicating for the first bit of it, because I am still operating in tester mode while the person on the other end expects me to be in communicator mode. Likewise, it takes me a little while to switch from communicator mode to tester mode and really get down into my project – though that direction is significantly easier, so it is much less of a problem.

It would be fascinating to see if there’s any actual difference in brain activity between modes, or if it is all in the software.

How can I make the transitions faster and smoother? Does anyone else do this, or have any suggestions?

I had trouble finding a decent definition of “craftsmanship.”

So far, the best I’ve got is “the work of a craftsman.” That is not exactly useful when I want to apply the idea of craftsmanship to testing. So, here goes: what is craftsmanship? I have experience with a few other disciplines that helps me think about it here.

One of those areas is the art of silversmithing. If I were looking at a piece of jewelry to determine its level of craftsmanship, I would look at the joins: are its soldered joins even and well made? Are there gaps? Are its cold connections flush and well polished, or will they snag and come undone? Is the silver clean and bright, and is its finish consistent and intentional? Is the backplate warped, has the wire been damaged, are the prongs symmetrical, does the stone rattle in its setting?  Is the metal of good quality, is the stone beautiful? Are the materials that went into the jewel worth the work that was done to them?

All of that relates to the quality of the piece without saying what it is, but quality is not all of craftsmanship.

If I have a spectacular gemstone, and put it in a poor setting, I have not completed an example of good craftsmanship. If I have a spectacular piece of metalwork, and place a “blah” gemstone in it, the result is the same. Even if I have both an excellent stone and an excellent setting, and they do not complement one another but clash, I still have not created a piece with exemplary craftsmanship. Thus, craftsmanship also has a component of design. If it has not been designed well, it cannot perform its function.

If I have a piece of jewelry with a stunning design, excellent components, high quality of construction, and such enormous weight that it is not wearable for more than a few minutes without discomfort, I still have not satisfied the demands of craftsmanship. In order for me to consider it a good piece, it must perform its function.

The first two components are indispensable to the third. In order to perform its function, a piece must be designed well and created well.

So, is craftsmanship the combination of well-thought design, quality of construction, and performance of its function, or is there more to it than that?

If I’ve nailed the definition down, craftsmanship in software would start at the design level. The product would need to be designed, coded, and tested, all with craftsmanship in mind. The designers, coders, and testers would all need to work with their utmost level of skill in order to deliver a product that is well designed to perform its function, that is created in order to perform that function well, and that actually does perform out in the real world.

But what does that mean, for a tester?

I cannot guarantee that the design will optimize the product to perform its function, but if I have a hand in that design, I can push it in the right direction. I cannot guarantee that the coders have used all their skill to create a program that does its thing, but I can test it to make sure it actually does it, under a variety of circumstances.
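
To put “a variety of circumstances” into concrete terms, here is a tiny table-driven sketch. The clasp function and its 50-gram rating are made up to fit the jewelry metaphor; the point is the shape of the table, with its typical, boundary, and nonsense rows.

    # A pretend piece of product code, invented for illustration:
    # a jewelry clasp rated to hold up to 50 grams.
    def clasp_holds(weight_grams: float) -> bool:
        return 0 < weight_grams <= 50

    # One behavior, many circumstances: typical, boundary, and hostile inputs.
    cases = [
        (1.0, True),    # a light pendant
        (50.0, True),   # exactly at the rated limit
        (50.1, False),  # just past the limit
        (0.0, False),   # nothing to hold
        (-3.0, False),  # nonsense input must not pass
    ]

    for weight, expected in cases:
        actual = clasp_holds(weight)
        assert actual == expected, f"clasp_holds({weight}) -> {actual}, expected {expected}"
    print("held up under every circumstance in the table")

The boundary and nonsense rows are where the craftsmanship questions tend to live.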

Is craftsmanship any different from quality assurance?

I think it is, so I’ve missed something in the definition.

Looks like I need to think a little more on this.

I am not a fan of conflict across departments. You know the kind – the DBAs don’t like the software developers, the developers don’t like QA, QA wrangles constantly with the project managers. All that kind of conflict manages to accomplish is spreading strife and interfering with a process that, from my point of view, doesn’t need any more interference.

Where I work, QA does wrangle constantly with the project managers. We have different ideas of what “quality” means. We have different ideas of exactly how long things will take (and the software developers’ ideas are different again).

In the interest of fostering communication across departments, I am going to do a little cross-department snooping: I’m going to shadow the product owners, find out how they do what they do and why, and see if I can’t feed what I learn back into the QA process.

This ought to be interesting.