Episode 28

Data challenges in software testing with Maaret Pyhäjärvi

Hosted by
Alexandra Ebert
Why should companies take software testing seriously? What are the data challenges testers and QA engineers encounter? What kind of tools do they need to meet them? Listen to Maaret Pyhäjärvi talk about her passion, software testing! From this conversation, you can find out:
  • why continuous testing is needed,
  • how handling data privacy requires specific approaches,
  • what exploratory testing is, and why automation and data are central to it,
  • why realistic, production-like test data is so important,
  • how to tackle business rules with test data,
  • what the difference is between testing functionality and testing data journeys,
  • why testing in the insurance industry starts with the data,
  • why all testing is, in fact, data-driven,
  • and why data access is one of the major challenges in testing.

Transcript

Alexandra Ebert: Welcome to the 28th episode of the Data Democratization Podcast, where we talk about data privacy, data democratization, and all the doors synthetic data can unlock. I'm Alexandra Ebert, your host, and MOSTLY AI's Chief Trust Officer. Today, I'm super excited to bring our first-ever episode about software testing to you. If you regularly listen to our show, you know that ensuring privacy while retaining data utility is a huge challenge for AI development that synthetic data helps to solve. When it comes to software testing, too, privacy is a major showstopper.

If it weren't for privacy laws, businesses would use production data to test systems. But of course, that's not an option. At least, not a privacy-compliant one. While traditional test data generation tools provide nothing but headaches to software testers around the world, synthetic data can make fast test data generation a breeze with data that's not only super realistic, but also retains business rules.

Today, I'm joined by Maaret Pyhäjärvi, a super experienced test engineer. She worked for years in the insurance industry, but also in numerous other areas, from real estate to manufacturing. With her extensive experience and her contagious passion for testing, Maaret is not only a walking glossary of software testing terms but also a sought-after leader and speaker for many events around testing.

Today, we talked about the data challenges of software testing that she has to tackle every day. She also shared some of her testing best practices and how she expects the field to evolve. Lastly, we talked a bit about the role of synthetic data in software testing. With that, let's immerse ourselves in the challenging yet impactful world of software testing.

Alexandra: Welcome, Maaret. It's really great to have you on the show. You are the first test engineer and testing expert that I have the pleasure of talking to on the podcast since we also ventured into the testing space with our synthetic data solutions. A very warm welcome, and I'm very much looking forward to all the insights that you will share today. Before we get to that, could you maybe briefly introduce yourself to our listeners?

Maaret Pyhäjärvi: Actually, I'll go back. My name is Maaret Pyhäjärvi. I've been in testing, and in the IT industry in general, for 25 years. I started by accident. I knew the Greek language; that was one of my things. I apparently knew how to use computers because I was studying computer science, and with those two together, I ended up testing Microsoft Office. That was where I started 25 years ago. I've been an engineering manager. I've been a tester. Right now, I work as a principal test engineer at Vaisala, and we do weather-related applications.

Alexandra: Very, very interesting. If I remember correctly from one of our earlier conversations, you also worked in the insurance space quite a lot, which is something that's of particular interest to our listeners who come mainly from the insurance and the banking sector. If you have some insights in testing from those industries where more sensitive and personal data is used, then I would also be very interested in learning more about that.

One thing that would be interesting for me: you mentioned that you ventured into testing actually by accident, but you wouldn't have stayed in an area you're not passionate about for as long as you've been in testing. Why are you passionate about testing? What excites you about the field?

Maaret: 25 years ago, when I started, I thought no one values testers; the role felt underappreciated. After a year of testing, I ventured into being a developer for a moment, and I realized that good testers are much more valued than average developers, so I ventured back. I discovered what it is about testing, and that's what brought me back and made me realize that I'm actually really good at it. And growing year by year, you get even better at it. I think what I mostly enjoy is this idea that testing is about breaking illusions, basically.

You look for something that other people think is working. They think they designed it so that it already should be okay, but you are looking to surprise them. You're looking to find ways of showing them that the things they believe are true might not be true. I often call myself a feedback fairy: I'm trying to deliver that message with a smile on my face, being nice and kind about it. It's not that I'm trying to find the weak spots of people. We have weak spots in understanding fairly complicated systems. That's what testing is about. It's like lending yet another person with brainpower focused on the possible problems and finding those that--

Basically, that's what I bring into the project, and I think it's Sherlock Holmes-type work. It's so much fun, and there are so many different dimensions, so much different work, that there's never a dull day. That's why I'm in this profession.

Alexandra: Now that you describe it, I definitely can understand what makes you passionate about the topic. As you described it, of course, it's important not to be competitive about discovering flaws in the work of others, but basically just to have a fresh pair of eyes that wasn't involved in developing an application from scratch, to really help identify if there are some weak spots, blind spots, so that everybody can work together to improve the final product and the experience for the users. Very interesting.

Maaret: My product owner, just a couple of days ago, the new one, because I started on a new project in my company, asked me, "Why aren't we just hiring one more developer into the team, and everyone could pitch in on testing? Why do we have a specialist? What's the difference there?" I gave him this example: just the previous evening, I was doing some testing and I found a bug that I even knew how to fix. I can fix bugs, and I had basically about half an hour of time, and I needed to make a choice: do I spend that half an hour continuing testing, so that we know the most we can after those 30 minutes, or do I fix the bug, which would take maybe 15, 20 minutes, or even half an hour?

If I was a developer, if that was what I centered on, I would've probably fixed the bug, because it needed to be fixed. Instead, I continued and I found yet another thing, and both of them got fixed very quickly the next morning when people were back at the office. I'm a bit nocturnal; I like working in the evening shifts. That's my thing in many ways.

Alexandra: Understood. So your advice to our listeners would be to not necessarily combine testing and engineering, but to have separate roles because of the different priorities and focus areas?

Maaret: My advice is that testing is too important to be left just to testers. It's also too important to be left just to developers. You need enough focus, enough centering of testing, so that you don't end up at just the basic level, but can raise the level up to where your users are truly happy. Think of testing as fast-forwarding the whole user experience in production. When you try to simulate, within a week, or maybe even within a day, an entire year of data that happens in production, the whole life cycle, every event, doing that in testing actually requires some thought, and it also requires time and eyes and observation on whatever results you are seeing.

Alexandra: That makes sense. Out of curiosity, since you're stating that testing is too important to be left to testers, who else should be involved? I would guess the software engineers, but anybody else?

Maaret: The product owners, definitely. I've spent a lot of time this week with the product owner creating acceptance criteria together. It might be that I'm writing the acceptance criteria as the team's current resident tester and he's reviewing them, or it might be that he's writing them and I and the rest of the team are reviewing and asking the hard questions: what about this scenario? Can you give me an example? Getting those ideas out, and then accepting as well. Of course, product owners need to be participating, but through demos, every stakeholder should be involved.

In some ways, I think that everyone is involved in testing. It's like singing: everyone can sing, everyone can test, and everyone will test, but not everyone goes on a stage and sings in front of an audience of 2,000 people. That's what you expect from the tester. You just add a layer on top, but then again, many eyes are always better than a single person or single role trying to do all of that work.

Alexandra: Even if it's the professional performer with the great singing voice, got your point. You mentioned that sometimes when you test systems, you have this challenge ahead of you where, in one week or even one day, you try to simulate a whole year of data and of users actually using the program. This makes it apparent that data plays quite a critical role in your day-to-day work. What are the challenges around test data that testers usually face, especially when it comes to sensitive and personal data, like you had in the insurance context, but also in general?

Maaret: Let's start with the general aspect of it. In testing, all of the results are basically valid only until the next change. I think of test results as milk: they go sour quite quickly, and you need to replenish them. To replenish your test results, you need to go back to where you were earlier. You need to get your data back to the starting point from where testing is easy again. That is not an easy place to get to. That's one of the practical, general things that you need to work on.

Specifically around privacy, or even the nature of the data: it's not just the privacy, it's the nature of the data. If it has financial information, it's probably going to be different. If it has people's personal information, it's probably going to be different. If the mass of the data is going to tell you some secrets when you see it, it's probably going to be different in how you handle it. You might need to collect smaller samples or larger samples for your testing, or you might need to scramble your data. You might or might not have tools for any or all of this.

Alexandra: Understood. You mentioned that data, or testing results, tend to go sour as quickly as milk, don't they, and that there's a challenge in getting back to the starting point. Can you give us a concrete example so that even non-experts in testing can better understand what you mean by that?

Maaret: Let's imagine we are testing someone who has a birthday today, and there's a feature where, on their birthday, they get balloons on the screen. We've probably all seen this feature. They can only get those balloons if today is their birthday. It might be that we've protected the birthday so that we can tweak it and change it, so that it's a moving thing: every day is that same person's birthday. We might need to find a new person every single day, or, the other way around, we might be changing the data so that it is available, or we might be changing the clock so that it's always today. Especially if you think of automation that you're running, your tests don't pass anymore as soon as it's not today. In that sense, going back to what you had, to what you needed to see the exact thing, the balloons coming up, is something that you need control over.
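
A minimal sketch of the kind of clock control Maaret describes, with a hypothetical `show_balloons` feature and an injected `today` value standing in for "changing the clock"; all names here are illustrative, not from the episode:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class User:
    name: str
    birthday: date  # only month and day matter for the balloons

def show_balloons(user: User, today: date) -> bool:
    # Injecting "today" instead of calling date.today() gives the test
    # control over the clock: every run can be "the user's birthday".
    return (today.month, today.day) == (user.birthday.month, user.birthday.day)

def test_balloons_only_on_birthday():
    user = User("Sam", birthday=date(1990, 3, 14))
    assert show_balloons(user, today=date(2024, 3, 14))      # balloons appear
    assert not show_balloons(user, today=date(2024, 3, 15))  # and only then
```

The same effect is often achieved by faking the system clock with a time-freezing test library rather than threading a parameter through, but the idea is identical: the test, not the calendar, decides what "today" is.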

Alexandra: Understood. Since you mentioned automation in testing, how important is it?

Maaret: It is nowadays really, really important. Personally, I'm one of those people-- First of all, there's exploratory testing, which basically means that we are designing new ideas of how we will test while we're testing. A lot of people think that that's not automation-centric, but the way I think of it is contemporary exploratory testing, where you make notes, you document your testing, and you make your own future life easier by documenting with automation or extending your reach with automation. Automation is a central part of that.

Actually, there's so much change nowadays that you cannot do the same work exploring unless you have some automation available to you. So many environments, so many different uses. Again, data is a great example, in the sense that you might think that these 2 people, or these maybe 10 people, these 10 data samples, are the same, but when you run all of them, which you can do quite easily with automation, you discover that one of them wasn't actually the same.

Alexandra: Okay.

Maaret: The surprises that we often look for: when you have that automation as a mechanism, you can also find those surprises, and data is a great example of one of those dimensions that you really need to play with.
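
What she describes maps naturally onto parameterized tests: the same check run over a batch of samples that look interchangeable, so the one that is not actually the same fails on its own line. A minimal pytest sketch, with a made-up `normalize_policy_number` as the system under test:

```python
import pytest

def normalize_policy_number(raw: str) -> str:
    # Hypothetical code under test: strip separators and uppercase.
    return raw.replace("-", "").replace(" ", "").upper()

# Ten samples a human would skim as "all the same"; automation runs
# every one, and the odd one out fails visibly.
@pytest.mark.parametrize("raw", [
    "fi-1234", "FI-1234", "fi 1234", "FI1234", " fi-1234",
    "fi-1234 ", "fi--1234", "Fi-1234", "fI-1234",
    "fi_1234",  # the surprise: an underscore survives normalization
])
def test_samples_normalize_identically(raw):
    assert normalize_policy_number(raw) == "FI1234"
```

Nine of the ten pass; the tenth fails and reveals the sample that only looked the same.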

Alexandra: Understood. Can one think of it this way: with exploratory testing, automation actually gives you the freedom to spend time continuously exploring new possibilities, while you automate all the approaches that you've already identified as valuable for testing the system and finding any weaknesses?

Maaret: A lot of people think of it that way, that it frees up time by letting you just trust that things work, but personally, I think of it differently. You could still manually explore even if you didn't have automation. What automation gives you is like a checklist that helps you notice when things are changing: when, for example, we forget to tell our colleagues that we changed something, automation is particularly good at telling us that something relevant has changed, because it's going to reliably fail if we didn't talk to one another. I think of it more as a communication mechanism than a timesaver.

Alexandra: That's interesting; I haven't heard that before. Talking about the data that you use for your testing, which type of data do you usually use? Is it self-constructed test data? Is it masked, anonymized, or synthesized production data? What's the approach that's most needed and most feasible in your day-to-day work?

Maaret: My day-to-day work usually includes system testing. Most of it is on the whole integrated system, and there, what I would look for in the data is realism in terms of what we have in production. Most typically, for me, it would be something that we've copied from production and made some adjustments to so that we can use it, or it might be generated so that it matches a production pattern. Of course, I also do smaller-scale testing, depending on what is ongoing, on what change we're working on. There, I might actually need very hand-tailored, specifically created data.

I have a lot of simulators in my life right now, different ways of creating particular patterns. "How is the weather doing?" Because, let's face it, our office is 21 degrees, quite consistently. It doesn't give me much variation, so I need to control that. Production data in that case doesn't make much sense, because you can just regenerate current data. Whereas in the insurance sector, in particular, you really needed the production data, because that data is basically the history of someone's entire employment since the beginning of their career in Finland. That's 50 years of data, with probably 5, 10, whatever number of systems on top of that creating the data. There have been many programming mistakes in those systems, causing that data to be somehow different in various patterns, and it might or might not have been cleaned up after those mistakes.

Because of the versatility of that data, you really need to find representative samples: both for the logic that we want to implement right now, so that we can verify it works correctly, and also people whose data is a representative sample of things happening over time, which gives you the chance of seeing that fast-forward feedback that we were looking for.
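
For the "generated so that it matches a production pattern" case she mentioned, a common approach is a rule-driven generator. A minimal sketch using the Faker library, with an invented employment-history shape standing in for the real Finnish insurance schema; the field names and rules are assumptions, not Maaret's actual data:

```python
import random
from faker import Faker

fake = Faker("fi_FI")   # Finnish locale for plausible names
Faker.seed(42)          # seeded, so every run produces the same data
random.seed(42)

def fake_employment_history(max_spells: int = 8) -> list[dict]:
    """One person's employment spells, oldest first, back-to-back."""
    spells = []
    start = fake.date_between(start_date="-50y", end_date="-1y")
    for _ in range(random.randint(1, max_spells)):
        end = fake.date_between(start_date=start, end_date="today")
        spells.append({"employer": fake.company(), "start": start, "end": end})
        start = end  # the next spell begins where the last one ended
    return spells

person = {
    "name": fake.name(),
    "born": fake.date_of_birth(minimum_age=18, maximum_age=68),
    "history": fake_employment_history(),
}
print(person)
```

Note what such a generator misses by construction: the decades of accumulated programming mistakes Maaret describes. Clean patterns can be generated; historical quirks have to come from production-shaped data.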

Alexandra: That sounds interesting. Can you give me a specific example from one of these insurance datasets of why it's so important to have realistic, or, if privacy allows, even production data? Because there are, as you mentioned, maybe errors in the data or some peculiarities that you wouldn't have thought about yourself if you were to create the test data either manually or using some tools.

Maaret: Yes, that's definitely one of the things, the realism and the patterns that you didn't plan for. But there's also the side of what you can plan for. For example, when I was working at the insurance company, we made a list of 201 different, basically, types of history. We have these business rules where you need to be of a particular age, you need to be insured in a particular way, in a particular order. For example, you might need to have a marking in your credit information that you haven't paid your insurance bills. The processing goes differently with all of these different rules.

We had a really long list of representative cases to go through the business rules. Just picking data that would match those 201 sample boxes is a huge amount of work. And if you don't find some, then, of course, you won't be able to verify whether all of the combinations and logic that you had assumed you would need to verify are actually working correctly.
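
A minimal sketch of the "sample boxes" idea: express each business-rule combination as a predicate, then report which boxes your candidate records actually cover. The three rules below are invented placeholders for the real 201:

```python
# Each "box" is a named predicate over a person record; a test plan
# needs at least one real record that lands in each box.
BOXES = {
    "insured_under_18":         lambda p: p["age_at_first_insurance"] < 18,
    "credit_default_marking":   lambda p: p["credit_default"],
    "gap_in_insurance_history": lambda p: p["history_gap_years"] > 0,
}

def coverage(records: list[dict]) -> dict[str, list]:
    """Map each box name to the ids of the records that fall into it."""
    return {name: [r["id"] for r in records if pred(r)]
            for name, pred in BOXES.items()}

records = [
    {"id": 1, "age_at_first_insurance": 16, "credit_default": False, "history_gap_years": 0},
    {"id": 2, "age_at_first_insurance": 25, "credit_default": True,  "history_gap_years": 2},
]

for box, hits in coverage(records).items():
    # An empty box means that rule combination cannot be verified at all.
    print(f"{box}: {hits or 'NO REPRESENTATIVE DATA'}")
```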

Alexandra: Understood. One of the challenges is really coming up with test cases that fulfill all of the complicated business rules that larger organizations tend to have.

Maaret: Yes. In the insurance sector, a lot of times when we talk about test cases, it starts with the data. In some other sectors, when we talk about test cases, it starts with the functionality. In the insurance sector, you need that particular starting point of the data so that you can even see the functionality, and it's going to give you the rules that are implemented in the system on top of that data.

Alexandra: How do you make this distinction, whether you start with the data or with the functionality?

Maaret: I've been thinking about this a lot over the years. For a long, long time, I used to think in terms of data-intensive and functionality-intensive systems. For example, I was working at F-Secure. I thought that's definitely a functionality-intensive system; F-Secure does antivirus. File systems and events and scanning were the domain, so there were no databases. Now that I've been around more years, I'm starting to realize that there are things like NoSQL, saving the data in a format that isn't a traditional database but is still a collection of things, and that maybe everything actually is, in some ways, data-driven and data-intensive anyway.

It's more like there's this basic baseline of functions that you will test, and then you need to sample more things around data. It's like a two-dimensional perspective: you have the functionality and the data, and you will only be able to trust your testing results when you have this matrix of putting the two together. I'm thinking you really can't categorize; data is everywhere. There's just the question of whether it is given as an input or as part of the processing, like a hidden input inside the application. There might be differences there, but data is always part of testing.

Alexandra: That makes sense. One thing that would be interesting to me, since you mentioned the importance of data: the feedback I often get from large insurance organizations and also banking institutions is that, especially when it comes to test data, access is something that's super cumbersome for them and takes ages. Is this an experience that you share? What are your experiences with access to data and the time it takes?

Maaret: I've had many troubles with access to data. I do have the same experience. It also splits down into many different smaller problems. There might be just technical access: we don't want to give you access, so even if you asked, we can't give it to you, because, for example, you might be a representative of the contracting organization, and we just have a principle that you are not going to get that. It might be that the data you want access to is owned by another organization, but you still get that data integrated with your own data through an API call, basically, because the systems are integrated in some way. That's another one: you don't just have rights to someone else's data.

Then there's the tooling and the understanding. It's not always obvious that the database-handling tools are user-friendly to all of our business users who would need access to that data. If we give them access, maybe we don't have separate read and write access; they can do both. I have spent weeks and weeks of my career fixing even production systems after someone got access and thought they would fix something that seemed simple, and it turned out not to be so simple. We may want to limit access, and technically, we might not have all those groups for giving out just the right access.

Then, still, there's the whole idea of security related to data: we want to be sure that no one tampers with important data without a trace. For example, say I worked at an insurance company that insures the president of Finland. It's obvious that we need functionality ensuring that I, as a single individual tester, am not going to go and look at the history of the president of Finland. If I do, at least they know that I did, and there's no way for me to claim that I didn't. There are so many of these dimensions that we might need to take into account. It's definitely complicated.

Alexandra: It sounds like that. You described yourself as a feedback fairy at the beginning of our conversation, so if there were a testing fairy that would grant you three wishes to become more effective and faster in testing when it comes to data, what would your wishes be? What would be helpful in your day-to-day work?

Maaret: Easy ways of understanding the data, visualizing the data so that I could see how things are the same and different. That would definitely be one of my big wishes. Access to production data, so that there would be a straightforward way for me to get things from production that I can use for my testing. If it needs scrambling, an invisible thing in between, something where I understand how it works in that state. The third one, probably being-- Actually, my third wish would be faster moving of data from one place to another. I've spent quite many hours waiting for a full database, not even a huge one, but still one of significant size, to be copied from one place to another so that I can use it for a particular testing purpose.

Alexandra: That indeed sounds a little cumbersome, with all the waiting time involved. You mentioned access to production data; this is something that we hear over and over again, but of course, due to GDPR, especially in the European Union, it's something that I think not even a testing fairy can grant. It's also the reason why we now focus on synthetic data for the testing space, because it's as good as the production data and therefore, hopefully, something that will contribute to testing. One other thing that I really want to get your input on: what are testing best practices that you could share with our listeners so that they can become more effective in their testing, some procedure or process that you usually follow?

Maaret: I would say that maybe there are two dimensions that are good to be aware of. The first one is that you test on different levels. You test on the unit level: whatever small piece, even the individual database, the individual data that you're creating, whatever you're changing there, you test the things that you are implementing on the smallest possible scale, and you systematically ask that of your developers, because developer intent belongs with the developer. No one else should be trying to decipher what the developer's intent was. They can make it visible with testing on that unit level.

Then the next level is where you put pieces together and you have something bigger, so you're integrating: some kind of API- or service-level testing, so that you can see that the relevant group of things works together as intended. That's the next level. The third level is the customer's perspective on the value of the system: that the users are getting out of the entire system what they are expecting. You put all the smaller pieces together into whatever the customers are using and make sure that that works as well. That's one of the dimensions.

The other dimension that I would think of is the perspectives of testing. We usually have four big ones that we look at. We have functionality, and data is related to functionality very strongly. Then we have security; again, data is part of security as well, at least making sure that it's protected. We have performance; yet again, we find data within that. The fourth one is usability. Depending on what information you're trying to present to your users, there are probably going to be data-related perspectives to that one as well.
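
The first of those dimensions, the three levels, can be compressed into a small sketch. The VAT calculation below is invented purely so that each level has something concrete to check, and the system-level test is only stubbed, since a real one would drive a deployed application:

```python
# Level 1 -- unit: the smallest piece, making developer intent visible.
def vat(amount_cents: int, rate: float = 0.24) -> int:
    return round(amount_cents * rate)

def test_vat_unit():
    assert vat(1000) == 240

# Level 2 -- integration/API: relevant pieces working together as intended.
def invoice_total(line_items: list[int]) -> int:
    subtotal = sum(line_items)
    return subtotal + vat(subtotal)

def test_invoice_integration():
    assert invoice_total([1000, 500]) == 1860

# Level 3 -- system: the whole flow the customer sees, end to end.
def test_checkout_system():
    # In a real suite this would drive the deployed application
    # (UI or public API) and assert on what the customer receives.
    ...
```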

Alexandra: Now I know where your "data is everywhere in testing" is coming from.

Maaret: I'm just noticing that the things I'm looking for when I'm testing are variables. Variables and data are almost synonyms, almost, but not quite. I would look at variables as something that might be temporary, and data as something that stays around for longer. In that sense, you always have the temporary things and the long-standing things in all of these dimensions.

Alexandra: Talking about variables, can you give our listeners another tangible example of what type of variable you would be looking for, and how you would even detect these kinds of variables?

Maaret: Maybe I'll give you a real example from an application that I was just teaching on yesterday. It's called E-Primer. It's not a very complicated one; it was just created for testing purposes. It basically takes English text and says whether it follows the rules of what is called E-Prime. No one knows what this really is when we start testing the application: it's about avoiding use of the verb "to be". In that particular case, we have a user interface with a little box on it, and we need to write something; the data is an input into that application.

I usually start with things like a demo stream. If you try "to be or not to be", Hamlet's dilemma, then it should show you some red and some blue, and you start understanding the application. Then I ask: what is it? What are the other pieces that you might need to test there? Maybe someone wants to try just text without this "be" verb, and fine, no problems found. But as soon as they put in two words with a line change between them, they find a bug. As soon as they put in "you're", like two words put together with a pronoun, they find a bug. As soon as they put in long text laid out horizontally, they find yet another bug. Long text vertically, a similar thing, and I could continue the list.

The practical thing that you need to do with these variables is identify the specific samples that are essentially different, the ones that will show you some different aspect or feature of the application. It might be how the screen is laid out, like these horizontal and vertical scroll bars, or it might be that it just doesn't process the input the way you intended it to.
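
To make that bug class concrete: a naive implementation of an E-Prime check might split on spaces and compare against a list of "to be" forms, which is exactly what the line-break and contraction samples above would break. A purely illustrative sketch, not the real E-Primer:

```python
import re

BE_FORMS = {"be", "is", "am", "are", "was", "were", "been", "being"}

def naive_flags(text: str) -> list[str]:
    # Splits on spaces only, so "be\nor" stays one token and a "to be"
    # form next to a line break slips through undetected.
    return [w for w in text.split(" ") if w.lower() in BE_FORMS]

def better_flags(text: str) -> list[str]:
    # Word-boundary matching survives line breaks and punctuation, and
    # an extra check catches contractions like "you're" (= "you are").
    words = re.findall(r"[a-z']+", text.lower())
    return [w for w in words if w in BE_FORMS or w.endswith("'re")]

print(naive_flags("to be\nor not"))    # [] -- the newline hides "be"
print(better_flags("to be\nor not"))   # ['be']
print(better_flags("you're testing"))  # ["you're"]
```

Each essentially different sample, the line break, the contraction, the long text, probes a different assumption baked into code like `naive_flags`.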

Alexandra: Understood. This brings another question to my mind. How is user input actually integrated into a product over the course of testing? I'm thinking of applications where the developers and product designers intended a specific mechanism, a specific tool, to be used in a certain way, but then users tended to use it completely differently. Do you have an example from your vast history of testing of how user feedback is integrated, and maybe even then expands the capabilities or aspects of a tool or software?

Maaret: Well, that's the area of usability and user experience. There are so many different types of usability tests that you can do. The easy ones, or the last ones that people usually start from, the ones that are part of dynamic test execution, involve having a user as part of your testing. Maybe you could find a real customer, pair up or team up with them, and even test the feature, acceptance testing, from their perspective.

If they find big things at that point, you've already implemented everything, so it wastes a lot of time; it's not the best idea to just rely on that, but it's definitely one of those things that you could do. Going earlier, it's the same thing. You can go in when you have just a sketch, show it to the customers, ask how they would use it and what feedback they have, and collect the information that way.

Alexandra: Sure. I would say that's the general best practice of product development: that you integrate the users, or the intended users, right from the beginning, rather than developing something for three years that then potentially doesn't directly answer to customer and user needs. Before we come to a close, the last question for you, Maaret, would be: how do you see the future of testing? What will change? What's up next? What are your expectations?

Maaret: I would expect that we get still more technologies in; there are going to be even more different kinds of systems that we need to be testing. We need to combine the information that we might already, at least partially, have in new and unique ways. I would expect the field stays interesting and important in that sense.

Maybe some of the technologies, particularly around AI, are going to change the future so that instead of having programmers, we'll have programmer-testers who are guiding the machine in learning. They're actually more tester-like people in that sense. It's going to be even more muddled who's the tester and who's not. Again, that's why I'm saying that testing is too important to be left just to testers; it's everywhere.

Alexandra: Makes sense. What about continuous testing, and also the frequency of releases? I remember from one of our earlier conversations that you indicated that over the course of your career, there has been quite some change. Maybe you want to expand a little on that as well.

Maaret: Again, looking back 25 years, we used to do infrequent releases. Now, it is very, very frequent. In some teams, it might be weekly. In some teams, it might be daily. In some, it might be monthly, but all these shorter cadences didn't use to exist. I would say that I believe we're going to be doing more of this: you make the smallest possible change, maybe it takes a few hours, and then you already get that all the way to your customers. That's going to be more popular.

It's already the way many companies work, but I think it's going to be coming to more organizations. Again, for me, looking at the future, I think that the future is already here, it's just not equally divided. That's one of those famous quotes, but I think it rings true so well in this industry: whenever you look around, you find your future at one of your neighbors.

Alexandra: Understood. Speed is becoming increasingly important when it comes to testing, at least for some testers and some very pioneering organizations; for some, it might already be very important.

Maaret: I don't think of it necessarily as speed is becoming important. I think of it as in, staying within a controllable reach of a baseline that you know that works, that is becoming more important. Then yes, it gives you speed, but it also gives you quality.

Alexandra: Okay. Understood.

Maaret: When you know what you have, that makes it easier to get back to a good state when you never take too many steps away from it.

Alexandra: Understood. To really make frequent, continuous releases and continuous testing a reality, what needs to be in place for an organization to operate this way?

Maaret: What needs to be in place is this idea that you identify small steps of value, and you are disciplined in taking those smaller steps, not trying to do a big chunk. There's also-- I like milk-related metaphors a lot. There's this whole idea that when you buy software, it comes in small cartons, like milk cartons, and it's cheaper that way.

We think of it like milk, when it's not like milk at all. With milk, when you buy a bigger carton, you get it cheaper. Software doesn't work that way; it's exactly the opposite. What needs to be in place is this idea that you're buying small pieces, you are continuously adding small pieces, and then you are carefully making sure that the small pieces are taking you to the future where you want to be with your own application.

Alexandra: Now it makes sense. Well, I initially said I would only ask you one question, and I think it turned into something like five follow-up questions, but thanks a lot for explaining it. Thanks a lot for your predictions for the future of testing, and especially for all the tangible metaphors that you shared with me and our listeners today. Thank you so much for taking the time, Maaret, and for giving us insights into the, as you described it, very exciting field of testing.

Maaret: Thanks. It was great being here. Thanks for having me.

Alexandra: Sure.

Alexandra: Maaret's passion for testing is shared by a vast community of testers and QA engineers. Understanding complicated systems and finding faults is indeed fascinating investigative work, and data is a huge part of it. As Maaret said, there's never a dull day in testing, and I can truly believe her after what we heard today.

Testing is about fast-forwarding the customer experience and simulating the data future of an application. For the customers of MOSTLY AI, synthetic test data generation is one of the most popular use cases, since synthetic data is a perfectly realistic replacement for production data, and getting access to test data, or generating it with other tools, is highly cumbersome.

Ready to try synthetic data generation?

The best way to learn about synthetic data is to experiment with synthetic data generation. Try it for free or get in touch with our sales team for a demo.