As the CTO of UniteSync, can you tell us about your background and the unique expertise you bring to the role?
Sure. I have more than ten years of professional experience in software development, alongside a Master’s in Applied Informatics from the University of Hradec Králové in the Czech Republic. I started in full-stack development but have since focused on building scalable web applications and automating data-intensive workflows, particularly for systems that handle large data volumes with high reliability.
Earlier in my career I built tools for creators and businesses: internal analytics systems, automation platforms, and workflow optimization solutions. Those products taught me to design with performance, automation, and scalability in mind, and I bring that approach to UniteSync, whose platform helps music creators collect royalties within the complex worldwide rights management system.
That system has to handle disorganized, inconsistent metadata, which requires precise matching at scale. As CTO, I lead development while making sure the technology addresses real-world problems with efficient, transparent solutions. I focus on building systems that work smarter, not just bigger; UniteSync has grown rapidly because our systems run efficiently while operations stay lean.
What inspired your journey from being a developer to becoming a CTO, and what were some pivotal moments in your career transition?
Great question. My path from developer to CTO didn’t follow a predetermined plan; it grew organically out of curiosity about problems that extended beyond code. Early in my career, I enjoyed building new systems and finding ways to make them more efficient and scalable. Over time, I realized that most of the technical issues I was solving were really symptoms of larger product and organizational problems.
That realization pushed me to think strategically about product-market fit and about aligning technical decisions with business goals. A major turning point came when a creator tool I had built suddenly became popular, and the rapid growth exposed every flaw in the systems I had designed. It forced me to step back and think about how to build systems that would keep working through a 10x increase in users. That shift in mindset was pivotal for me.
Another defining moment was meeting Carlos, who became my co-founder at UniteSync. His industry expertise, combined with the business vision, convinced me we could make a meaningful impact together. My role as CTO shifted from writing code to product development, team leadership, and strategic planning. I still write code, but I spend just as much time on strategies for scaling systems, building processes, and growing the culture. My goal now is a technology organization where a lean team can tackle significant challenges with smart, efficient solutions to the genuine problems of real people.
You’ve mentioned facing challenges with undocumented API limitations during the MVP development at UniteSync. Can you share how this experience shaped your approach to building scalable SaaS platforms?
During the MVP phase of UniteSync, we encountered several unexpected problems with third-party APIs, especially when dealing with undocumented rate limits, inconsistent data formatting, and silent failures. Every surprise on their end directly impacted our reliability since these APIs were critical to our data ingestion and royalty-matching processes.
We started by patching the problems manually with request retries and delays, but we soon realized that wouldn’t work at scale. The experience shaped how I think resilience and abstraction should be built into SaaS architecture. Here’s how it influenced our long-term approach:
Defensive engineering became the default. We adopted a principle that every third-party system should be treated as unreliable until proven otherwise. Every integration got retry logic, timeout handling, circuit breakers, and detailed logging. Our queuing system dynamically throttles requests based on observed response behavior rather than trusting documented limits, which were often inaccurate.
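To illustrate the pattern, here is a minimal retry-with-backoff and circuit-breaker sketch in Python. The names, thresholds, and structure are hypothetical illustrations, not UniteSync’s actual code:

```python
import random
import time


class CircuitOpenError(Exception):
    """Raised when the circuit breaker is refusing calls."""


class CircuitBreaker:
    """Trip after `max_failures` consecutive errors; allow a trial call after `reset_after` seconds."""

    def __init__(self, max_failures=3, reset_after=30.0):
        self.max_failures = max_failures
        self.reset_after = reset_after
        self.failures = 0
        self.opened_at = None

    def call(self, fn, *args, **kwargs):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_after:
                raise CircuitOpenError("circuit open; skipping call")
            self.opened_at = None  # half-open: allow one trial call
        try:
            result = fn(*args, **kwargs)
        except Exception:
            self.failures += 1
            if self.failures >= self.max_failures:
                self.opened_at = time.monotonic()
            raise
        self.failures = 0
        return result


def retry_with_backoff(fn, attempts=4, base_delay=0.5):
    """Retry a zero-argument callable with exponential backoff plus jitter."""
    for attempt in range(attempts):
        try:
            return fn()
        except CircuitOpenError:
            raise  # don't retry while the breaker is open
        except Exception:
            if attempt == attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt) + random.uniform(0, 0.1))
```

The jitter on the backoff delay avoids synchronized retry storms when many workers hit the same flaky API at once.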
Modular architecture with fallback options. We created clean abstractions that separate core logic from external dependencies, and we built failover routing so that when one provider fails, requests flow through another and the system keeps functioning. This also made it easier to add new providers or switch between them.
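A failover abstraction of this kind can be sketched as below; the provider interface is a hypothetical illustration, not the real integration layer:

```python
class AllProvidersFailedError(Exception):
    """Raised when no provider in the chain could serve the request."""


def fetch_with_failover(providers, request):
    """Try each provider in priority order; fall back to the next on failure.

    `providers` is a list of (name, callable) pairs; each callable takes the
    request and returns a response, or raises on failure.
    """
    errors = {}
    for name, call in providers:
        try:
            return call(request)
        except Exception as exc:
            errors[name] = exc  # record the failure and continue down the chain
    raise AllProvidersFailedError(f"all providers failed: {list(errors)}")
```

Because callers only see `fetch_with_failover`, swapping or reordering providers never touches core logic.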
Visibility and alerting. We built internal dashboards tracking API response times, error rates, and data quality metrics for every integration. That let us detect issues before user complaints arrived and resolve them quickly. Silent failures were no longer silent.
Those difficult early experiences pushed us to adopt a platform mindset instead of staying narrowly product-focused. Every component of our system is now designed with future expansion, failure tolerance, and scalability in mind. UniteSync is more robust, scalable, and trustworthy for users because of those hard lessons.
UniteSync deals with sensitive rights data across multiple territories. How do you ensure data security and compliance while maintaining system performance?
Absolutely. UniteSync manages sensitive rights data across multiple territories, so data security and compliance are fundamental to the core product design rather than afterthoughts. We understood early on that earning user trust required both strong data protection and a fast platform.
Here’s how we approach it:
1. Data encryption at every layer
All data in transit uses TLS 1.3, and data at rest is encrypted with AES-256. Databases, backups, and internal logs are encrypted as a standard measure. Because personal identifiers and royalty-related financial data are especially sensitive, even internal service-to-service communication is secured.
2. Role-based access control and audit trails
Access to sensitive information is tightly regulated. We apply least-privilege principles, with fine-grained permissions separating user data, financial information, and operational tools. Critical data modifications are logged to support audit trails, which matters especially when working with societies and PROs across various jurisdictions.
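A minimal sketch of least-privilege checks with an audit trail might look like this; the roles, actions, and in-memory log are illustrative assumptions, not the platform’s real implementation:

```python
import datetime

# Hypothetical permission map: role -> set of explicitly granted actions.
PERMISSIONS = {
    "support": {"read:user"},
    "finance": {"read:user", "read:payout", "write:payout"},
    "admin":   {"read:user", "read:payout", "write:payout", "write:user"},
}

AUDIT_LOG = []  # a real system would write to durable, append-only storage


class PermissionDenied(Exception):
    pass


def authorize(role, action):
    """Least-privilege check: deny unless the action is explicitly granted."""
    if action not in PERMISSIONS.get(role, set()):
        raise PermissionDenied(f"{role} may not {action}")


def audited_write(role, action, record_id, apply_change):
    """Run a critical modification and append an audit-trail entry."""
    authorize(role, action)
    result = apply_change()
    AUDIT_LOG.append({
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "role": role,
        "action": action,
        "record": record_id,
    })
    return result
```

The key property is that the denial happens before the change is applied, and every successful modification leaves a timestamped trace.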
3. Territory-aware compliance
Every region has its own data protection regulations, such as GDPR in the EU and CCPA in California. Our system adjusts its data handling to each region’s requirements: user data is isolated where rules demand it, and we provide full data export and deletion capabilities for compliance.
4. Performance with isolation
We keep the system fast through asynchronous job queues, caching, and scoped data access that lets users load only the information they need. Scoping access this way also shrinks the potential attack surface.
5. Regular audits and threat modeling
Security reviews happen continuously, and the team runs breach simulations to test our defenses. We also work with legal advisors to track regulatory changes and adapt our policies accordingly.
Our goal is to combine fintech-level security with the flexibility the music business demands: data moves wherever it needs to without sacrificing protection or performance. Clients trust us to manage their royalties, and protecting something that important is never simple.
You’ve integrated AI into backend systems at UniteSync. Can you describe a specific project where AI significantly improved your operations, and what lessons did you learn from the implementation process?
One of the most significant AI-driven projects at UniteSync is our automated work-matching engine, which reconciles musical works across multiple fragmented datasets: PRO registrations, DSP reports, and client catalog submissions.
Previously, work matching was a manual, error-prone process. Song titles varied slightly, writer names were abbreviated or misspelled, and data formats differed between sources, all of which made matching painful. This created a bottleneck in onboarding and royalty recovery.
How AI improved operations:
Our AI tooling combines fuzzy matching, NLP, and context-aware logic to compare metadata across sources, weighing title similarity, writer alias consistency, duration, ISWC/ISRC clues, and label information.
The system now:
– Accurately matches 80–90% of incoming works automatically.
– Flags only the genuinely difficult cases for human review.
– Surfaces royalty underreporting, missing rights, and duplicate works.
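To give a sense of how such signals might combine, here is a simplified scoring sketch using Python’s standard-library `difflib`. The weights, fields, and thresholds are illustrative guesses, not the engine’s real values:

```python
from difflib import SequenceMatcher


def normalize(title):
    """Lowercase and strip punctuation so 'Song (Remix)' and 'song remix' compare equal."""
    return "".join(c for c in title.lower() if c.isalnum() or c.isspace()).split()


def title_similarity(a, b):
    """Ratio in [0, 1] between two normalized titles."""
    return SequenceMatcher(None, " ".join(normalize(a)), " ".join(normalize(b))).ratio()


def match_score(work_a, work_b):
    """Blend several signals into one confidence score in [0, 1]."""
    score = 0.6 * title_similarity(work_a["title"], work_b["title"])
    # An exact ISWC match is strong evidence on its own.
    if work_a.get("iswc") and work_a.get("iswc") == work_b.get("iswc"):
        score += 0.3
    # Durations within 2 seconds add a weak positive signal.
    if abs(work_a.get("duration", 0) - work_b.get("duration", 0)) <= 2:
        score += 0.1
    return min(score, 1.0)
```

A real engine would add writer-alias resolution and learned weights, but the shape, several weak signals blended into one score, is the same idea.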
Impact:
What used to take hours of manual work per client now takes minutes of processing time. Automation lets us handle far more clients without adding operational staff, and recovery rates have improved because precise matching means fewer royalties slip through the cracks.
Lessons learned:
1. Domain context is everything
We had to adapt standard AI tools with industry-specific logic for how music metadata behaves. Minor variations in writer credits or version titles can mean major differences in the royalties received.
2. Human-in-the-loop systems work best
We deliberately avoided full automation, so human operators remain essential. With repetitive tasks eliminated, the team now concentrates on exceptions and judgment calls. That’s where AI is most effective.
3. Start simple, then train smarter
We started with rule-based logic and introduced machine learning once we had accumulated enough data. That let the system deliver accurate results from the start without unnecessary complexity.
In short, AI transformed a manual bottleneck into one of our most scalable strengths.
As someone passionate about efficient tooling, what’s your strategy for evaluating and introducing new tools to your team without disrupting workflow?
That’s a great question and something that is very important to me. I have always thought that tools should ease your workflow rather than complicate it, so I take a very thoughtful approach when bringing anything new to the team.
Here’s my strategy:
1. Begin with the problem, not the tool
Before assessing any new tool, I ask: Which bottleneck do we need to solve? If a tool does not clearly relate to a current bottleneck—whether it is deployment delays, testing inefficiencies, or visibility gaps—I won’t even try it out. Efficiency is about fixing the root causes and not adding more complexity.
2. Pilot in isolation
I try new tools either on my own or with another team member on a test project. We look at the usability of the tool, the points of integration, and whether it actually improves the workflow. We don’t roll it out across the team until we see clear and measurable value.
3. Seamless integration with the existing stack
I’m big on interoperability. Any tool we choose to use must integrate well with what we already have—be it GitHub, CI/CD, logging systems, or custom backend services. If it creates friction or redundancy, then it’s a no-go.
4. Keep documentation and training lightweight
For any new tool, I develop quick-start guides or internal documentation, often with real examples from our stack. The goal is to make onboarding as easy as possible and get the learning curve as close to zero as possible.
5. Reevaluate periodically
We don’t let tools become sacred. Every few months, we check: Is this tool still pulling its weight? Is there something better out there? If the answer is no, we sunset it. Lean tooling is key to keeping the system clean and agile.
For example, when we introduced our internal job queue system, we first tried to use off-the-shelf tools. Some of them worked well, but some of them added too much complexity. So we decided to build a lightweight in-house tool for our ops needs, and it ended up being faster, cleaner, and much easier to scale.
In short, I see tooling as an extension of team culture. The right tools give people leverage, but only if you roll them out with clarity, purpose, and a deep respect for your team’s focus.
Managing a remote team can be challenging. What’s the most effective technique you’ve developed for fostering collaboration and maintaining productivity in a distributed work environment?
Managing a remote team has real challenges, but also real opportunities when you’re intentional about how you operate. I’ve found that a culture of asynchronous clarity and structured autonomy produces the best results: give people the tools and information to work independently while making sure everyone understands priorities and progress. Here’s how we do it at UniteSync:
1. Clear written communication is non-negotiable
We write everything down, from technical specifications to weekly objectives to decision records. Team members can find what they need without waiting on someone else to respond, which removes communication bottlenecks and protects long stretches of focused work.
2. Daily async updates, weekly sync check-ins
Each team member posts a short daily async update covering what they worked on, what’s next, and any blockers. Once a week we hold a video call to sync on roadmaps, discuss open questions, and keep the human connection alive. This keeps us productive without filling calendars with unnecessary meetings.
3. Ownership over micromanagement
People get clear goals but choose their own approach, and they own projects end to end, which builds the trust that drives motivation. As CTO, my job is to remove barriers, not to look over shoulders.
4. Shared visibility through simple dashboards
We built internal dashboards that monitor system health, project status, and code deployments. That transparency replaces many status-update meetings and gives everyone instant visibility.
5. Don’t forget the human layer
We make time for casual conversation, celebrate small wins, and encourage breaks to counter the isolation of remote work. Small gestures like Slack memes, demo days, and shout-outs for great work go a long way.
The key is building systems that enable the workflow rather than restrict it. When people understand the mission and get both autonomy and support, the team stays productive no matter where they are.
You’ve worked on automating royalty matching across 117+ countries. What’s the biggest technical hurdle you’ve encountered in this process, and how did you overcome it?
Automating royalty matching across 117+ countries is one of the most demanding things I’ve worked on, and the biggest technical hurdle was data inconsistency between collection societies and DSPs. Every territory has its own standards, or none at all, with formats that differ significantly and metadata quality that is highly inconsistent.
You might receive three reports in three different shapes: one in clean CWR format, another in Excel with missing writer IDs, a third with names abbreviated inconsistently. Matching large volumes of data with high accuracy under those conditions is extremely complicated. The solution required a multi-layered strategy:
1. Custom parsing and normalization engine
We built an adaptable parsing system that accepts multiple file types, including CSV, XML, CWR, and XLSX, and converts everything into a unified internal schema. This was foundational: without it, you’re comparing apples and oranges.
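A toy version of such a normalization step might look like the following; the field mappings and schema are hypothetical stand-ins for the real, much larger ones:

```python
def normalize_record(raw, source_format):
    """Map a raw row from a given source format into a unified internal schema."""
    # Per-format field mappings; real ones are far larger and territory-aware.
    field_maps = {
        "cwr":  {"work_title": "title", "writer_name": "writer", "iswc": "iswc"},
        "xlsx": {"Title": "title", "Writer": "writer", "ISWC": "iswc"},
    }
    mapping = field_maps[source_format]
    record = {target: raw.get(src) for src, target in mapping.items()}
    # Normalize the fields that downstream matching depends on.
    if record.get("title"):
        record["title"] = " ".join(record["title"].split()).strip().lower()
    if record.get("iswc"):
        record["iswc"] = record["iswc"].replace("-", "").replace(".", "").upper()
    return record
```

Once every source lands in the same schema with the same canonical forms, the matching layer never has to know which society or DSP a row came from.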
2. AI-assisted fuzzy matching
The engine uses fuzzy logic, NLP, and contextual rules to find matches despite variations, analyzing title similarity, writer aliases, and duration while cross-checking ISWC/ISRC codes and tracking publisher fingerprints. Fuzzy matching surfaces matches that strict exact-match logic would miss.
3. Confidence scoring + human-in-the-loop review
Rather than making binary yes/no decisions, the system assigns each potential match a confidence score. Anything below a set threshold goes to human review. This combination keeps the pipeline fast while keeping results accurate.
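The threshold routing can be sketched as follows; the 0.9/0.5 cutoffs are hypothetical examples, not the production values:

```python
# Hypothetical thresholds: auto-accept above 0.9, auto-reject below 0.5,
# and queue everything in between for human review.
AUTO_ACCEPT = 0.9
AUTO_REJECT = 0.5


def route_match(candidate, score):
    """Return the queue a candidate match belongs in, given its confidence score."""
    if score >= AUTO_ACCEPT:
        return ("accepted", candidate)
    if score < AUTO_REJECT:
        return ("rejected", candidate)
    return ("human_review", candidate)


def triage(scored_candidates):
    """Split (candidate, score) pairs into the three queues."""
    queues = {"accepted": [], "rejected": [], "human_review": []}
    for candidate, score in scored_candidates:
        queue, item = route_match(candidate, score)
        queues[queue].append(item)
    return queues
```

Tuning the two thresholds trades reviewer workload against the risk of a wrong automatic decision, which is exactly the human-in-the-loop balance described above.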
4. Territory-specific logic
The system includes territory-specific rules and overrides to account for differences in how societies behave, such as fields certain societies omit or the local identifier systems they use. In effect, our matching logic learns how each region operates and speaks its language.
The outcome?
80–90% of matches are now automated.
New clients can be onboarded significantly faster.
Our system recovers royalties that clients didn’t even know existed.
Looking back at your transition from developer to CTO, what’s one piece of advice you’d give to aspiring tech leaders about balancing technical skills with leadership responsibilities?
The most important advice I would offer developers moving into tech leadership positions is:
Your value extends beyond code, but don’t stop coding. As CTO, your job is to build systems and teams that solve problems at scale rather than to fix every technical issue yourself. A deep understanding of the codebase and architecture still matters, but your real organizational impact comes from decisions about prioritization, communication, and culture. Here’s what helped me make that transition:
Trust your team—and empower them
Early on, I had to fight the urge to fix everything myself. Instead, I put my effort into mentoring and setting standards that help others succeed. Leadership is about multiplying the team’s output, not optimizing your own.
Translate between tech and business
You become the bridge between engineering and the rest of the company. Your ability to translate technical trade-offs into business terms and vice versa will make you more valuable to the organization.
Protect focus, but embrace context-switching
As a developer, deep work is sacred. As CTO, you need both deep work and the ability to switch between code, product, security, hiring, and strategy. Structure your week to make room for both.
Lead with clarity, not control
Teams need clear direction, trust, and accountability, not excessive supervision. The better you communicate, the less you’ll need constant check-ins.
Keep your technical skills sharp, but understand that your real power as a tech leader comes from creating clarity, building systems, and developing a team that functions without you writing every line. When that happens, you’ll know the transition is complete.
Thanks for sharing your knowledge and expertise. Is there anything else you’d like to add?
Thank you, I really appreciate the opportunity to share my experience. If there’s one thing I’d emphasize, it’s that tech leadership is ultimately about people. The code, the architecture, the tools—they’re all important, but it’s the team, the culture, and the clarity of vision that determine whether a product truly scales and delivers value.
At UniteSync, we’ve built a culture around smart systems, trust, and constant iteration—and that’s what allows us to keep growing without losing quality. If you’re building in SaaS, especially around data or creator tech, I’d love to connect and share ideas.