Hands-on AI engineering leadership.

How I lead AI engineering teams

I lead AI engineering teams by reducing ambiguity without reducing ambition. In fast-moving AI work, teams need room to experiment, but they also need clear engineering standards, ownership, evaluation discipline, and honest conversations about risk.

Hands-on enough to understand the trade-offs

My leadership style is hands-on enough to understand the technical trade-offs, but structured enough to help others own the system.

I connect this style to the AI / People / Data / Code framework: AI decisions need people ownership, data discipline, and code that can be operated in production.

Production over prototypes

Prototype learning matters, but production ownership decides whether AI becomes useful.

Governance by design

Security, auditability, and policy are architecture concerns, not launch paperwork.

Developer experience matters

Teams adopt standards faster when the safe path is also the clear path.

Simple systems win

Simple boundaries, clear ownership, and observable behavior beat clever but unmanaged complexity.

People scale platforms

Mentoring and shared standards turn platform work into organization-level leverage.

Learning speed is a team capability

Evaluation, feedback, and retrospectives help teams improve without chasing every trend.

Leadership in production AI work

Hands-on, but not a bottleneck

I stay close enough to architecture, code, and delivery risks to make useful decisions, while helping engineers own more of the system.

High standards, open challenge

Strong engineering standards work best when the team can challenge assumptions and improve the design without theatre.

Psychological safety with technical rigor

Teams need safety to surface risk early, and rigor to turn that honesty into better architecture and delivery.

Turning ambiguity into delivery

I help teams convert broad AI ambition into decisions about scope, ownership, evaluation, architecture, and release paths.

Mentoring engineers into AI-native workflows

Mentoring means helping engineers think beyond prompts into retrieval, evaluation, governance, observability, and product adoption.

Balancing innovation, governance, and risk

Useful AI delivery needs experimentation, but it also needs permission boundaries, evidence, and controlled change.

How I work with stakeholders

I translate technical trade-offs into business, product, compliance, and engineering decisions so stakeholders can make informed choices.

How I create leverage

  • Unclear AI ideas become explicit engineering decisions.
  • Experiments, products, and platforms are separated early.
  • Engineers learn to think in systems, not only models.
  • Technical delivery stays connected to product, compliance, and business constraints.
  • Reusable patterns beat one-off heroics.

What I do not optimise for

Demo theatre

A prototype is useful only if it teaches the team what production will require.

Model-first thinking

Most enterprise AI failures are system, data, ownership, or adoption failures.

Governance after launch

Security, auditability, and policy need to be part of the architecture from the beginning.

Hero engineering

Sustainable AI delivery needs platforms, standards, mentoring, and shared ownership.

Seniority beyond buzzwords

In practice this means naming trade-offs, saying no to vague automation, and helping the team learn from releases.

Hiring for senior AI engineering leadership in Zurich?

A useful fit when you need someone close to architecture and code, yet accountable for people leadership and regulated delivery.