Antoine van der Lee
Episode 14 - December 23, 2019 - 24:57

Inside WeTransfer’s App Testing Process with Antoine van der Lee

Featuring Antoine van der Lee, founder of SwiftLee

It’s not every day that you get to peek inside the inner workings of a major tech company like WeTransfer. But today, I had the absolute pleasure of chatting with Antoine van der Lee about his work as a lead iOS engineer at the file transfer company.

Antoine van der Lee, who lives in Amsterdam, is also the founder of SwiftLee, a weekly blog jam-packed with useful Swift, iOS, and Xcode tips.  

During our chat, Antoine revealed:

  • Why WeTransfer uses unit tests, not UI tests
  • What the company’s continuous integration set-up looks like
  • How WeTransfer structures its release train

Listen to our entire conversation below, and check out my favorite parts in the episode highlights!

Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️ review on the podcast player of your choice and share it with your friends!

Highlights from this Episode

Darko: (00:02) Hello and welcome to Semaphore Uncut, a podcast where we talk about development, continuous integration, and general good testing practices. Today we have with us Antoine van der Lee from SwiftLee. Antoine, thank you so much for joining us. Feel free to go ahead and introduce yourself.

Antoine: I’m Antoine van der Lee, as you said, from the blog called SwiftLee, where I have been blogging every week since May 2018. It’s already been a while. Apart from that, I’m also working at WeTransfer here in Amsterdam, where I’m developing the Collect by WeTransfer application, fully in Swift. Really eager to talk about that.

Darko: Okay, great. Before we continue, can you tell us a bit more about the product that you’re working on and the team that is developing it?

Antoine: Yeah, totally. So we’re a team based in Amsterdam. We also have offices in New York and LA. The team I’m working on in Amsterdam has about 12 people: an Android team of two, an iOS team of three developers including me, plus two designers, two back-enders, and one front-ender.

We’re developing the Collect by WeTransfer app, which is a multi-platform application where you can collect all kinds of content and share it in an official way with everyone you want to share it with.

Antoine’s journey to automated iOS testing

Darko: (01:32) Great. Thanks for sharing that. So you came into the iOS world first with Objective-C and then with Swift, over the course of 10 years. I don’t have any firsthand experience with iOS development, so my plan is to approach this episode as a noob who wants to get into the iOS development world.

Generally, when you start, the first couple of months or even years are spent developing without writing too many tests. How did you enter the area of automated testing for iOS, and how has it changed since you started?

Antoine: That’s a great question. I’ve been in this world for 10 years, but I definitely haven’t been writing tests for 10 years. I think that’s partly because as a junior you start writing applications first. That’s already a big chunk to learn. And once you go ahead and learn more about building applications, you start to realize that writing tests is actually a really nice tool to make your apps more stable and to come to a great solution in an easier way. 

So, 10 years ago I started as a Windows Phone developer with C#, and I had some great people around to teach me the basics of creating well-structured applications. But my heart was with Apple.

At that time I started developing iOS applications at an agency. After a year, I joined another agency, which developed quite a lot of applications within a year. The main problem there is that you already need a lot of time to build the actual application, and there’s not really a lot of time left to write tests, which is also the reason why I didn’t really write tests for most of those 10 years.

I think that mostly changed when I developed a few frameworks for the agency which were used in quite a lot of applications. When your code is shared like that, it’s more important that it is thoroughly tested and that you’re pretty sure the core logic powering all those applications is working fine. That’s where I started writing my first tests, basically.

Once I started at WeTransfer, the whole paradigm of writing tests changed for me. WeTransfer is a product company, right? So, we develop an application which lasts for a lot longer. We have way more time to write an actual good solution. We do test-driven development where tests are really important to keep the product stable.

How WeTransfer tests iOS applications

Darko: (04:10) I can totally connect with that. Agencies working for clients have it a bit rougher in terms of timelines and budgets. At a product company, much more craft is welcome because the product should last much longer; it’s still going to be ours in a couple of years and we’ll have to maintain it. I bet a lot of our listeners can connect with that too.

And in terms of unit testing and doing some integration tests–or in the case of mobile apps, UI tests–how do you see that split? There is generally a testing pyramid, which advocates for having a lot of fast unit tests and then a few slower integration or UI tests which touch many parts of your application. How does that look for your team?

Antoine: Yeah, so I think you’re hitting a spot there. There are multiple ways of writing tests and I think everybody has their own opinion about it. Looking at WeTransfer, we decided to only go for unit tests right now. The reason for that is that UI tests are simply unstable and really hard to maintain, especially if you have a young project in which the UI is changing quite a lot. It’s often easier to maintain unit tests which really test the business logic instead of the flows of the UI. 

Obviously a downside of that is that your UI is not tested as thoroughly as your business logic, but we have a great QA team who does that for us. Looking further into the future, ideally we would replace part of that QA work with UI tests to make sure all the flows are working as expected.
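
To make the unit-test-first approach concrete, here is a minimal XCTest sketch of the kind of business-logic test described above. The `UploadValidator` type and its rules are hypothetical, purely for illustration:

```swift
import XCTest

// Hypothetical business-logic type, purely for illustration.
struct UploadValidator {
    static let maximumFileSize = 2_000_000_000 // an assumed 2 GB limit

    func canUpload(fileSize: Int) -> Bool {
        fileSize > 0 && fileSize <= Self.maximumFileSize
    }
}

final class UploadValidatorTests: XCTestCase {
    func testAcceptsFileWithinLimit() {
        XCTAssertTrue(UploadValidator().canUpload(fileSize: 1_000))
    }

    func testRejectsFileOverLimit() {
        XCTAssertFalse(UploadValidator().canUpload(fileSize: UploadValidator.maximumFileSize + 1))
    }

    func testRejectsEmptyFile() {
        XCTAssertFalse(UploadValidator().canUpload(fileSize: 0))
    }
}
```

Tests like these run in milliseconds and keep passing when the UI around them changes, which is exactly the maintenance argument Antoine makes.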

How the introduction of Swift impacted app testing

Darko: (05:56) In that journey, you can always opt for an approach where you test the majority of things with automated tests but still manually test some parts. That actually brings us to something we’ll talk more about later: the release cycle. But before we jump to that, let me ask another question. In the last couple of years, how has the Swift language influenced testing, if at all?

Antoine: I think it’s not Swift per se that changed the way we test; it’s mainly Apple dedicating way more time to building great testing solutions. If we look back at the past two years, Xcode has had quite a lot of new additions. We have, for example, Xcode Server, which allows you to run tests on a server in a way that’s more integrated into Xcode, along with Xcode bots, and we have Test Plans, which were introduced this year.

It shows that Apple is way more focused on making sure its testing tools support what you expect from tests nowadays. Take, for example, running tests in parallel or in a randomized order. Those are things which are quite normal in the web world, and so is dealing with the flakiness of tests, right? If you run tests serially, in the same order every time, it could well be that your tests always succeed.

But when you enable parallelization or randomization of the tests, suddenly edge cases can show up that you didn’t see before. Those are just two examples of what Apple has added over the years. If you look back eight years, we definitely didn’t have that, and Xcode was a lot less stable at running tests as well.
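
To illustrate the kind of order dependence Antoine is describing, here is a small hypothetical sketch: a test class that passes when tests run in the default alphabetical order, but can fail once randomized or parallel execution is enabled, because both tests share mutable static state:

```swift
import XCTest

final class ProcessingQueueTests: XCTestCase {
    // XCTest creates a fresh instance per test, but static state
    // survives across tests: a classic source of order dependence.
    static var processedItems: [String] = []

    func testA_processingAddsItem() {
        Self.processedItems.append("item")
        XCTAssertEqual(Self.processedItems.count, 1)
    }

    func testB_reportAssumesItemExists() {
        // Only passes if testA_processingAddsItem ran first; fails
        // when the test order is randomized.
        XCTAssertEqual(Self.processedItems.first, "item")
    }
}
```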

Darko: Yeah. Something similar is true for the web world, as you mentioned. I know that in the beginning, especially with those UI tests, there were a lot of dependencies on the order in which they ran. Parallelization is one of the things that actually uncovered a lot of rotten tests.

Antoine: Yeah. And if you look at the state of things right now, I know from a lot of other developers that tests can behave one way when you run them locally and another way on continuous integration (CI) systems. The biggest problem we have right now in the Swift environment, and maybe the Objective-C environment as well, is that tests succeed locally but fail on CI, or show different behavior on CI. That’s, I think, one of the biggest slowdowns for us at WeTransfer as well.

Darko: What you just said in those last two sentences kind of describes my whole career. We have been running this hosted CI platform for almost nine years. To be honest, over the years we have documented quite a lot of those things, the recommendations, and so on.

It seems that the industry as a whole has learned that this is something you will experience. The problem is that CI is a completely different environment. Who knows what you tweaked on your local machine two years ago that makes it quite different from the CI environment. One thing that we have in the web dev world is Docker.

That helped a lot because you can have a particular version of, let’s say, Ruby compiled against a very specific version of libssl and so on. Those small, annoying one-minor-version differences have been eliminated, at least in the Docker world. So, are those small version mismatches a source of that behavior in iOS development, or do you find that something else is the major reason?

Antoine: Well, looking at what we experience right now at WeTransfer, it’s more often caused by ourselves than by the CI solution. If we take a look at how CI runs tests, for example, it always runs from scratch, right? It checks out the project, sets it all up, installs dependencies like gems, and then starts executing the tests on a freshly cleaned simulator.

If you look at the development process, we often run the tests in a simulator which already has the app installed. It’s not a clean build. We don’t update the gems before running the tests. So it’s exactly what you described with the Docker example: the environments don’t match.

So yeah, looking at what we have right now at WeTransfer, it’s often a case of making sure the environments match and testing on a clean simulator. In most cases it turns out the tests were just not set up correctly and it was simply a development mistake. There are a lot of scenarios which can contribute to unstable tests, more or less.
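
One way to guard against the leftover-simulator-state problem Antoine mentions is to have every test create and wipe its own state in `setUp`/`tearDown` instead of relying on whatever the simulator happens to contain. A minimal sketch, with an arbitrary suite name and key:

```swift
import XCTest

final class SettingsStoreTests: XCTestCase {
    var defaults: UserDefaults!

    override func setUp() {
        super.setUp()
        // Use an isolated suite and wipe it before each test, so the
        // result never depends on a previous run or on the app already
        // being installed on the simulator.
        defaults = UserDefaults(suiteName: "SettingsStoreTests")
        defaults.removePersistentDomain(forName: "SettingsStoreTests")
    }

    override func tearDown() {
        defaults.removePersistentDomain(forName: "SettingsStoreTests")
        super.tearDown()
    }

    func testStartsWithNoStoredValue() {
        XCTAssertNil(defaults.string(forKey: "lastOpenedBoard"))
    }
}
```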

Darko: Yeah, some of these areas are like one-time investments, which is great. I guess there are also some areas which can pop up randomly during the process.

WeTransfer’s continuous integration setup

Darko: (10:58) While developing features or fixing a bug, what are the things that you and your colleagues usually run locally and what are the things that you run in CI?

Antoine: We have a big CI setup, you might say. What we at least say to each other is, “Run the tests locally before you open the pull request.” Essentially, make sure the tests succeed locally. That’s the least you can do, and it also ensures you didn’t break anything else.

So, once we open the pull request, the continuous integration system kicks in and runs the tests. That brings us all kinds of information which you would otherwise have to post manually on a pull request. This also refers back to a talk I gave on speeding up development as an iOS developer, where I discussed how important it is to automate things to save time.

And at first it takes quite some time to set up the automation, right? But it will eventually save you time because you’ll have to do less yourself. One of the things we do at WeTransfer is automated pull request reviews.

Things like, “Hey, you should use a colon here,” or, “Use tabs instead of four spaces.” All those kinds of things you would normally reply to manually on a pull request are now automated using linters, and that saves us a lot of time.

We also have code coverage and warning reports. We ask for a changelog entry, a pull request description, and things like that, so we know for sure that those things are already done when we start reviewing a pull request, which saves a lot of time.
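
WeTransfer’s actual rules live in their open-source WeTransfer iOS CI repository; as a rough illustration of the kind of checks Antoine describes, here is a hedged Dangerfile.swift sketch using the danger-swift API. The specific rules and thresholds below are assumptions, not WeTransfer’s:

```swift
import Danger

let danger = Danger()

// Ask for a pull request description.
if danger.github.pullRequest.body?.isEmpty != false {
    fail("Please add a description to this pull request.")
}

// Ask for a changelog entry alongside code changes.
let changedFiles = danger.git.modifiedFiles + danger.git.createdFiles
if !changedFiles.contains("CHANGELOG.md") {
    warn("Please add an entry to CHANGELOG.md.")
}

// Flag very large pull requests to keep reviews manageable.
if (danger.github.pullRequest.additions ?? 0) > 500 {
    warn("This pull request is quite big; consider splitting it up.")
}
```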

Darko: Automating things that would otherwise be done manually saves a lot of time, and some of those things are just annoying, probably for the reviewer as well as for the person getting the feedback. It’s easier to argue with a robot on the other side than with a person, I guess.

Antoine’s CI/CD boilerplate open-source project

Darko: (13:00) And during the prep talk, you mentioned that over time you developed kind of a boilerplate project. Can you share a bit about how you use it and what’s included?

Antoine: Yeah, so a bit of context. WeTransfer is really driven to use open source frameworks if we can. So, last week we had a hackathon, and I developed another framework. I think the whole CI system was set up in maybe 15 minutes, which is quite fast. And the reason for that is that we created a repository on GitHub, which is open sourced. 

It’s called WeTransfer iOS CI, and it basically includes all the logic we use for all our repositories, both our internal projects and our open-source projects. It includes a fastlane file which we can reference from an open-source or internal project.

And that fastlane file will trigger things like Danger, which reports errors, warnings, and missing changelog entries. It runs the linter as well, and it triggers all the integrations we have set up for pull request reviews. It’s really easy to integrate because we can just execute that fastlane file from the fastlane integration within the project itself.

In each project, we simply reference the fastlane file, make sure the gems are installed, and trigger it when a pull request is opened. So yeah, that’s saving us a lot of time across a lot of projects. Obviously, it’s also a big time saver when you have to maintain several CI setups.

We have around six open-source projects which use the same CI setup. Before we had this solution, we were updating fastlane and the gems separately per repository. So, you can imagine each maintenance task was multiplied by six, as we have six open-source projects.

That was the point where we said, “Hey, we’re going to write an open-source solution and make sure that maintaining the CI integration in those projects becomes a lot easier.”

Darko: Yeah, most of our development is in the microservices area. I can totally relate to changing a couple of things and then finding there are five more repositories that need the same update. If you can extract that layer so it’s transferable everywhere, that’s fantastic.

Antoine: Yeah, exactly.

Darko: And maintaining that boilerplate. Is that something that you maintain yourself or is the community involved already? Or do you have any plans to popularize it more?

Antoine: So, right now it’s quite tailored to our way of reviewing pull requests and our code guidelines, more or less. It didn’t get as much traction as I hoped, but I understand why it’s not really used by others: it’s mainly built to work for us.

I think many others have their own way of setting up CI and they might not want all the reports we have in place. So, maintenance is mainly done by me and by others on the team for now. But if I get a few more contributors after this podcast recording, that would be great!

Darko: It can be just an inspiration for others to make something similar. That would be fantastic also.

Antoine: I think that’s one of the things it’s being used for. Our Danger file is also open source and linked on the Danger website, so people can get inspiration from it and basically copy the rules they want to have as well. So, in that sense, it’s been reused by others.

All aboard WeTransfer’s release train

Darko: (16:42) Great. One thing that was very interesting to me when I heard about it is how you ship your mobile apps every Monday with what you call a release train. Can you give us a brief overview of how that works internally for you?

Antoine: I think I can start by explaining why we built the release train. When we developed updates for our application, we were really tempted to say, “Okay, let’s reschedule the release so we can fit in this PR as well.” We would delay the release by a few days, and that happened more and more often, which is kind of lame, because if it gets delayed even more, you end up going a week without a release.

So, we sat down with the team and asked, “Hey, can we make this more efficient as well as more automated?” At the same time, we were doing our releases manually. We had to create an archive, submit it to App Store Connect, enter all the metadata in App Store Connect by hand, and then submit the build. If you compare doing that manually to doing it automatically, automation can save you a lot of time.

We set up a plan for our release train, which basically means that every Monday an automation starts from CI and does the whole app submission, basically using fastlane. We make sure that we deliver all the metadata, we take a build from GitHub, and we make sure that it’s submitted for review with “pending developer release,” so we can release it ourselves in the end.

So, how does it work? We have a QA team which gets a new build every day. Every evening around 7:00, we trigger a new build delivery and include the changelog, so our QA team knows exactly what has changed, and they run through the whole test plan we have set up with them.

The next morning, we come into the office and we basically hear from them whether the build is green-lighted, as we say. We then go to the GitHub releases page and see the build generated the day before, marked as a pre-release.

Basically, what QA can do there is say, “Hey, this build is good enough, it’s stable, and it’s release-ready. We mark this build as green.” The way we do that is by unchecking the pre-release checkbox, which makes it an actual release candidate.

Say, for example, that happens on a Wednesday. Then on Thursday another build is delivered. QA starts testing it, and if they think the build is good enough, they mark that build as green as well. This basically invalidates the build from the day before, since this is a newer green-lit build. And then we get to Monday. Every Monday at 10:00 am, the actual release train runs, and it triggers a check that takes the last green-lit build on our GitHub releases page and uses that to submit to the App Store.

And the great thing about this is that, no matter what, we will release the last stable build on a Monday. If your PR happens to be in that build, it will be released. If not, it will be released the very next Monday, which is at most a week away.

So, that took away a whole lot of discussion about when to release. Everybody on the team knows that every Monday there’s a new release coming, and we don’t have to think about releasing anymore. We only have to check if the CI run succeeded and whether the build was actually approved by Apple. And eventually, we need to press the release button. Then we’re all set.
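
To make the Monday selection step concrete, here is a hedged Swift sketch of how the last green-lit build could be looked up: it is simply the newest GitHub release whose pre-release checkbox was unticked. The repository name is hypothetical, and the real automation runs through fastlane rather than a standalone script:

```swift
import Foundation

// Minimal model for the fields we need from the GitHub releases API.
struct Release: Decodable {
    let tagName: String
    let prerelease: Bool

    enum CodingKeys: String, CodingKey {
        case tagName = "tag_name"
        case prerelease
    }
}

// Hypothetical repository; runnable as a Swift 5.7+ command-line script.
let url = URL(string: "https://api.github.com/repos/WeTransfer/example-app/releases")!
let (data, _) = try await URLSession.shared.data(from: url)
let releases = try JSONDecoder().decode([Release].self, from: data)

// The API returns releases newest first, so the first release that is
// not marked as a pre-release is the last green-lit build.
if let candidate = releases.first(where: { !$0.prerelease }) {
    print("Submitting \(candidate.tagName) to App Store Connect")
} else {
    print("No green-lit build found; skipping this week's release.")
}
```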

Darko: (20:29) While you were talking about this, I was keeping track of all the things it solves, and it seems that above all it synchronized the whole team. There are no more discussions, and it gives a high level of predictability for everyone on the team. Has anything in the team mechanics changed in terms of how you structure your week and your work?

Antoine: Well, since we want to have a green-lit build on Monday, we want to make sure that we deliver a new build every day. We know that if we open up a PR on a Friday, there isn’t enough time for QA to test it thoroughly in time. So, the PR will not be included in the upcoming release, more or less.

I think it’s more a mindset that has changed, and it took away a whole lot of stress. Before, you might rush your PR in on a Friday and still merge it, because you could do the release manually yourself on Monday, even though it might not have been thoroughly tested. Now we know we release on Monday and that’s it. We can’t do anything about it.

Darko: One other practice that you introduced is that you try to have a daily build as well, right? So it’s not that you go three days without a new build.

Antoine: Yeah, it keeps the whole team in the loop so they know what we’ve been working on. They can test it early, and QA always tests the latest available build. Looking back, we would rarely deliver a TestFlight build, maybe once a week, and the only build our team would see was the build we would actually release. It’s the whole thing together: connecting QA to the integration work of our team, and connecting the team to our TestFlight builds. It’s a whole lot more feasible for everybody.

Darko: (24:42) Okay, great. Well, it was fantastic to talk to you about all these practices and how they came about. Thanks so much for joining us and sharing all this.

Antoine: It was a pleasure to be here. Thanks for inviting me over to talk about this great topic.

Meet the host

Darko Fabijan

Darko, co-founder of Semaphore, enjoys breaking new ground and exploring tools and ideas that improve developer lives. He enjoys finding the best technical solutions with his engineering team at Semaphore. In his spare time, you’ll find him cooking, hiking and gardening indoors.
