All Blog Posts

Trifacta Legends May 2022:
Mario Truss & Armin Meyer at Seibert Media

July 1, 2022

Every month, Trifacta Legends recognizes customers who are doing groundbreaking work with data using Trifacta.

We’re pleased to announce the Trifacta Legends for May 2022: Mario Truss & Armin Meyer from Seibert Media.

Mario Truss is a Product Owner of Customer Data Engineering, and Armin Meyer is a Service Owner of Tools & Data at Seibert Media. Besides being a data nerd, Mario loves music and teaching things to people. 

Armin has been focused on agile methods for 10+ years, and has been working for the past 3 years to enhance the usage of data, tools and processes at Seibert Media. Aside from working with data, Armin is an avid skier.

We talked to Mario & Armin about their experience facilitating data modernization and democratization at Seibert Media. They shared some of the challenges they faced, and how they overcame them with Dataprep’s self-service solution.


Trifacta: Armin, can you tell us about your business at Seibert Media?

Armin: We provide some of the best-selling apps in the Atlassian Marketplace. Some of our solutions include Linchpin and Agile Hive. We also do consultancy, hosting, and license management for a lot of customers, and we are well known in the German-speaking region. We focus on team collaboration tools, and Mario & I work on the internal data management, data engineering, and business intelligence team.


Trifacta: Can you help us understand your data engineering journey, what technologies you use to help you achieve your objectives, and why?

Armin: As a Google Cloud Partner, we focus on doing this in Google Cloud, which is why we use Dataprep by Trifacta. But prior to that, we had always been hands-on with our data, so we had a lot of manual processes for reporting and controlling. One advantage this brought was that we didn’t have many on-premises systems in the data field, so we could go directly to the cloud before it was a “hot” thing. But the disadvantage was that when you do things manually, you face a lot of problems: it’s a lot of work, you have to regularly pull the data out of the systems, and your reports are static and quickly become obsolete. Everything you do on your reports is very costly. So we wanted to get better at using the data we had.


Trifacta: What were some of the first steps you took towards eliminating some of these manual processes to make better use of your data?

Armin: We started with some groundwork, using Kafka to continuously extract data from all of the systems that held relevant data for us. Then we pulled this data into Google BigQuery and systematically started transforming and processing it with Dataprep. As an output, we sent this to Data Studio.


Trifacta: And under this system, you were able to automate tasks that were previously quite manual?

Armin: That’s correct. We were able to replace a lot of manual work, like pulling the data out of the systems, and Dataprep automated the data transformations in a lot of cases, or at least made them a lot faster.


Trifacta: That’s great. What have been some of the business impacts of this shift, both now and going forward?

Armin: We have a BI team that creates data and does the job for the business people: they raise the questions, and we do the data transformation. The next step we want to take, and where Dataprep is essential, is to provide self-service capabilities for our analytics and for data integration into the other operational systems. The BI team can then focus on things like semantic models of our main data objects, where they model dimensions and common metrics, and let the users do the rest of the job themselves to get the insights they need.


Trifacta: That’s wonderful. Sounds like you guys have gone a long way towards democratization! Mario, are you able to share any more details on what the end-to-end data engineering process looks like for you at Seibert Media?

Mario: First we have data sources, which can be APIs, CSV files, or other data. We often have to deal with a lot of different data and varying data quality. We use Apache Kafka to extract data from those systems and load it into our BigQuery data warehouse as raw data. In almost every use case we apply some sort of transformation or enrichment rather than using the raw data directly, and that’s where Dataprep comes into play. Once we transform our data, we sync it back to our BigQuery data warehouse, and afterwards we serve our analytics purposes in Google Data Studio.
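At Seibert Media this transform-and-enrich step happens visually in Dataprep rather than in code, but as a rough sketch, the kind of cleanup it performs on raw records of varying quality might look like this in plain Python (the field names and rules here are hypothetical, not Seibert Media’s actual schema):

```python
# Illustrative only: Dataprep does this kind of work through a visual,
# no-code interface. The fields and rules below are made up for the example.

def clean_record(raw):
    """Normalize one raw record: trim strings, coerce types, fill gaps."""
    record = {k: v.strip() if isinstance(v, str) else v for k, v in raw.items()}
    # Coerce the (hypothetical) revenue field from string to float.
    try:
        record["revenue"] = float(record.get("revenue") or 0)
    except ValueError:
        record["revenue"] = 0.0
    # Enrich: fill a missing region with a default value.
    record.setdefault("region", "unknown")
    return record

def transform(rows):
    """Keep only records that have a customer_id, cleaned and enriched."""
    return [clean_record(r) for r in rows if r.get("customer_id")]

raw_rows = [
    {"customer_id": "c1", "revenue": " 120.5 ", "region": "DACH"},
    {"customer_id": "c2", "revenue": ""},   # missing values get defaults
    {"revenue": "99"},                      # no customer_id: dropped
]
cleaned = transform(raw_rows)
```

The output of a step like this would then be written back to the warehouse (BigQuery, in their setup) for reporting in Data Studio.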


Trifacta: You mentioned that Dataprep is a key part of this process. What makes Dataprep so essential for your team?

Mario: Dataprep makes it possible for people like myself, who don’t have a computer science background, to apply ETL transformations to the data and put it into the format we need in order to use it afterwards. It allows us to make those transformations without code, which makes data processing accessible to people who may not be able to write perfect SQL or another programming language.

Armin: To add on to that, we started around three years ago, and in the first stage we did all of the transformation and processing in BigQuery itself. But what we found is that you needed really experienced people to do this. That’s when we adopted Dataprep, which was a real game changer for us, because we could bring in people who weren’t so experienced with writing routines and SQL queries. So it’s now much easier to find people to do the job, and the job gets done more quickly.


Trifacta: So how have you achieved your goals as a company through this modernization and democratization process?

Mario: One of our goals as a company is to become a data-driven organization. We believe we can only become that if non-technical people have some sort of interface to interact with the data, and Dataprep’s low-code/no-code tool makes that possible. The democratization and transformation of the data is extremely valuable in making it accessible to business users so that they can get insights. The BI team can try things out by themselves without being dependent on us, and we can streamline the whole process so we don’t have to rely on a multitude of tools.


Trifacta: That’s wonderful! Mario and Armin, thanks again for sharing your story.