A few years ago, CERN chose ownCloud as a component for building an innovative sync and share service at LHC scale. After integrating ownCloud into their architecture, they needed deeper insight into how the sync client behaves, which led to the development of the Smashbox automated testing tool. Today, ownCloud has started to adopt this versatile tool for functional testing. In this post we discuss the origins of Smashbox with CERN IT engineer and Smashbox developer Jakub ‘Kuba’ Moscicki, and how Smashbox is being used in ownCloud with members of the ownCloud Development and QA teams. Next week, you’ll hear more about how Smashbox works and how you can get involved.
Origins of Smashbox
ownCloud has been used at CERN for two years and its usage is continually increasing. Due to the challenging requirements of the researchers, and the need for the infrastructure to reliably handle the massive amounts of data coming from the scientific instruments at CERN, the ownCloud deployment has been modified in several places. The architecture of CERN’s sync and share service, CERNBox, fuses the synchronization and web access layer provided by ownCloud with EOS, the petabyte-scale storage system used to handle LHC data.
Operating a service at this scale and complexity requires advanced quality assurance tools to verify the operational state of the service and to manage service updates. This is especially true for software running on user systems, where problems leave IT little room to help, unlike on servers where backups and other recovery mechanisms are in place. Moreover, at CERN’s scale even obscure corner cases become common, and problems require time-consuming direct interaction between IT personnel and users. On top of that, the multi-platform nature of the client, and the lack of control service providers have over user systems, further increase the number of possible versions and variations of clients interacting with the servers.
The idea started when the CERN team developed some scripts with test cases that exercised syncing. Whenever a new configuration was implemented, the scripts were run. More interesting cases were added, a name was invented, and automated Smashbox testing was born.
Smashbox at CERN
Running Smashbox became routine before any new sync clients were added to the environment. The checks are described by Kuba as ‘very mean’: the team tries to come up with scenarios that are difficult to handle. Tests are run continuously and under varying conditions. For example, some hard-to-reproduce issues stem from timing-dependent problems that only occur under very heavy server load, requiring many hours of running the tests under a high simulated load in order to trigger the problem often enough to gather data, so the ownCloud Development team can track the issue down and fix it.
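To give a flavor of the kind of ‘mean’ scenario such a test exercises, here is a minimal, self-contained sketch in Python of a concurrent-modification conflict: two clients edit the same file while out of sync, and the second upload must be rejected rather than silently overwrite the first. This is purely illustrative and does not use the actual Smashbox API; the directory layout, the `sync_upload` helper, and the content-as-version check are all simplified stand-ins for the real client/server protocol.

```python
import shutil
import tempfile
from pathlib import Path


def sync_upload(client_file: Path, server_file: Path, known_version: str) -> str:
    """Upload client_file to server_file only if the server copy still
    matches the version the client last synced; otherwise report a
    conflict. (A crude stand-in for a conditional PUT / If-Match check.)"""
    current = server_file.read_text() if server_file.exists() else ""
    if current != known_version:
        return "conflict"
    server_file.write_text(client_file.read_text())
    return "ok"


def run_scenario(base: Path) -> list:
    """Simulate two clients editing the same file while 'offline'."""
    server = base / "server"
    client_a = base / "clientA"
    client_b = base / "clientB"
    for d in (server, client_a, client_b):
        d.mkdir()

    # Both clients start from the same fully synced state, version "v1".
    (server / "report.txt").write_text("v1")
    for client in (client_a, client_b):
        shutil.copy(server / "report.txt", client / "report.txt")

    # Each client edits its local copy independently.
    (client_a / "report.txt").write_text("edit from A")
    (client_b / "report.txt").write_text("edit from B")

    # Client A syncs first and wins; client B's stale upload must be
    # refused, since the server no longer holds the version B last saw.
    results = []
    results.append(sync_upload(client_a / "report.txt",
                               server / "report.txt", "v1"))
    results.append(sync_upload(client_b / "report.txt",
                               server / "report.txt", "v1"))
    return results


if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as tmp:
        print(run_scenario(Path(tmp)))  # first upload ok, second conflicts
```

A real Smashbox test runs scenarios like this against a live ownCloud server, repeatedly and under load, which is what shakes out the timing-dependent failures mentioned above.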
Smashbox and ownCloud
ownClouders first got wind of Smashbox when it was demoed at a meeting at CERN. Klaas Freitag from the client team was particularly interested, as it allowed easy testing of the API the client uses to talk to the server. Next, the CERN Smashbox repo was forked, and several ownCloud developers and testers got to work writing tests and adding support for new APIs. It currently lives in the public ‘smashbox’ repository on GitHub, and work is ongoing to merge our changes back into the upstream Smashbox project. As most of the work is in the tests rather than the framework itself, this shouldn’t be a big problem, though the entanglement of tests and framework in the current repository would probably benefit from separation.
Kuba is glad that ownCloud has picked up Smashbox. As he points out, CERN has a culture of sharing and collaborating.
Read Part II here, where we dive a little deeper into how Smashbox works, what it is being used for, and where it will go in ownCloud.
Thanks to CERN and Jakub Moscicki for taking the time to answer our questions.