A Question That Nags

You wrote some Kotlin code, hit compile, and it ran on an iPhone. Kotlin Multiplatform (KMP) made that happen.

The whole thing feels as smooth as tapping your phone to pay at a convenience store. One beep and you’re done. But have you ever wondered how many systems are working behind the scenes between your phone sending the payment request and the merchant actually getting the money?

Running KMP on iOS works the same way. On the surface it looks like “write Kotlin, call it from Swift,” but the Kotlin/Native compilation pipeline underneath is far more involved than you’d expect.

I recently dug into this process and found it touches the compiler frontend, the LLVM backend, an Objective-C bridging layer, and a whole chain of other moving parts. Today I’m going to take that pipeline apart and walk through it station by station, so you can see exactly what your Kotlin code goes through before it becomes executable machine instructions on an iPhone.

First Things First: iPhones Don’t Speak Kotlin

Running Kotlin on Android is natural. Kotlin compiles to JVM bytecode, the build tools convert that to DEX, and the Android Runtime executes it directly. It’s like speaking English to someone in London — they get it.

But iPhones? The iOS runtime only understands two languages: Objective-C and Swift. Hand a chunk of Kotlin code to Xcode and it won’t even glance at it. That’s like walking into a French restaurant with a Chinese recipe book — the chef will politely tell you he can’t read it.

So the core problem KMP needs to solve is: how do you translate Kotlin code into something iOS can understand? This translation isn’t a Google Translate situation. It’s a multi-stage industrial compilation pipeline.

Stop 1: Kotlin Compiler Frontend — Making Sense of Your Code

Your .kt files start as plain text. The compiler frontend’s job is to turn that text into a structured Intermediate Representation (IR).

What happens here? Syntax parsing, type checking, making sure you didn’t try to use a String as an Int. If you made a mistake, the compiler catches it at this stage and stops you.
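
To make that concrete, here’s a minimal sketch (invented names) of the kind of mistake that dies right at this stage, before any platform backend ever sees the code:

```kotlin
// Caught by the frontend during type checking. Compilation stops here,
// so no IR is handed to the JVM or Native backends.
fun totalLength(words: List<String>): Int {
    val count: Int = "42"   // error: type mismatch (String assigned to Int)
    return words.sumOf { it.length } + count
}
```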

Think of it like taking a number at the bank. The ticket machine doesn’t care whether you’re depositing cash or applying for a loan — it just confirms you’re a legitimate customer and gives you a queue number. What counter you end up at and what business you conduct there is a downstream concern. The IR that the compiler frontend outputs is that queue number: platform-agnostic, ready for anyone to process next.

Starting with Kotlin 2.0, the K2 compiler frontend became the default, and JetBrains’ benchmarks show it can nearly double compilation speed. The upgrade is particularly noticeable in KMP projects, where cross-platform codebases tend to be sizable.

Stop 2: Kotlin/Native Backend — Tailored for iOS

This is where things fork.

If the target is Android, the IR goes through the JVM backend to produce bytecode. But if the target is iOS, it goes through the Kotlin/Native backend. This backend handles three key tasks:

First, it translates Kotlin IR into LLVM IR. What’s LLVM? Think of it as a universal “machine code translator.” The Swift compiler uses it, Clang (the C/C++ compiler) uses it, and now Kotlin/Native uses it too. In 2025, Google upgraded Kotlin/Native’s LLVM version to LLVM 16, improving both compilation efficiency and optimization capabilities.

Next, it generates Objective-C-compatible entry points. In the iOS world, Swift and Objective-C are first-class citizens. For Kotlin to survive there, it needs to “disguise” itself as something Objective-C can understand. The compiler generates corresponding Objective-C header files and symbol mappings for your Kotlin classes and methods.
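
As a rough sketch (class and framework names invented; the exact prefix depends on your framework’s baseName), here is a shared Kotlin class and the shape of the Objective-C interface the compiler derives from it:

```kotlin
// commonMain: an ordinary Kotlin class, exported to iOS by Kotlin/Native.
class Greeting {
    fun greet(name: String): String = "Hello, $name"
}

// The generated Objective-C header looks roughly like:
//
//   @interface SharedGreeting : SharedBase
//   - (NSString *)greetName:(NSString *)name;
//   @end
//
// Note how the parameter name folds into the selector (greetName:)
// and Kotlin's String surfaces as NSString.
```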

The last piece is packaging interop metadata that tells downstream tooling: what’s this method called, what are the parameter types, how do you invoke it.

You can think of the Kotlin/Native backend as pre-departure prep for going abroad. Your Kotlin code is heading to iOS “country” for work, so it needs its “resume” translated into the local language (Objective-C headers), its “work visa” sorted out (LLVM IR), and a “local contacts” document prepared (interop metadata).

Stop 3: LLVM Backend — The Heavy Machinery

Once LLVM has the IR, it starts doing what compiler backends do best: optimization and code generation.

This step includes dead code elimination (removing code you wrote that will never actually execute), loop optimization, register allocation, and instruction selection. LLVM generates highly optimized machine code targeting the ARM64 architecture — the CPU architecture used by iPhones and iPads. With Kotlin 2.1, the introduction of LLVM 16 reduced release binary sizes by roughly 5-15% and brought minor runtime performance gains.
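
As a toy illustration (names invented), this is the kind of provably unreachable branch that dead code elimination can strip from a release binary:

```kotlin
// A compile-time constant makes the branch below statically dead.
const val VERBOSE_TRACING = false

fun parseAmount(raw: String): Int {
    if (VERBOSE_TRACING) {
        // Unreachable when the constant is false: the optimizer can drop
        // the branch, and the string literal along with it.
        println("parsing: $raw")
    }
    return raw.trim().toInt()
}
```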

It’s like a car factory assembly line. Raw materials (LLVM IR) come in, go through stamping, welding, painting, and quality inspection, and out come finished components (.o object files). These components can’t drive on a road yet, but they’re genuine machine code.

Stop 4: Apple Linker — Assembling Components into a Vehicle

Object files (.o files) contain compiled machine code, but the references between them are still unresolved. For example, if your code calls NSString from the Foundation framework, that reference in the object file is just an unfilled placeholder symbol.
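
For instance, in a minimal iosMain sketch (function name invented), the NSUUID reference below compiles into exactly such an undefined symbol, which only the linker ties back to Foundation.framework:

```kotlin
// iosMain: calling Foundation from Kotlin via the platform.* interop.
import platform.Foundation.NSUUID

// In the compiled .o file, NSUUID is a placeholder symbol; the Apple
// linker resolves it against Foundation at link time.
fun newRequestId(): String = NSUUID().UUIDString
```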

The Apple linker’s job is to stitch all the object files together and fill in those placeholders. It links system frameworks (Foundation, libobjc, etc.), verifies architecture compatibility, and handles symbol conflicts.

If you’ve ever seen an error like Undefined symbols for architecture arm64, that’s the linker reporting a problem at this stage. It usually means you’re missing a linked library or have an architecture mismatch.

The final output is a .framework (typically packaged as an .xcframework for multi-architecture support). This framework is a standard iOS format that Xcode can import directly, containing compiled binaries and Objective-C headers.
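
In Gradle terms, that output is declared in the shared module’s build script. A minimal sketch (framework name invented), using the Kotlin Multiplatform plugin’s XCFramework DSL:

```kotlin
// build.gradle.kts of the shared module
import org.jetbrains.kotlin.gradle.plugin.mpp.apple.XCFramework

kotlin {
    val xcf = XCFramework("Shared")
    listOf(iosArm64(), iosSimulatorArm64()).forEach { target ->
        target.binaries.framework {
            baseName = "Shared"
            xcf.add(this)
        }
    }
}
// Running ./gradlew assembleSharedReleaseXCFramework then produces the
// .xcframework that Xcode imports.
```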

Stop 5: Swift Compiler Reads It — Disguise Successful

At this point, your Kotlin code has put on an Objective-C “suit” and become a standard iOS framework.

The Swift compiler uses its built-in Clang Importer to read the Objective-C headers from the framework. The Clang Importer parses those headers into type declarations that Swift can understand. From Swift’s perspective, this framework looks no different from system frameworks like UIKit or Foundation.

The clever part: Swift has no idea it’s calling Kotlin code. It thinks it’s calling a regular Objective-C framework. It’s like ordering delivery and the packaging says “handmade in-house,” but there’s actually an automated production line in the back kitchen. Whether the food tastes good is one thing, but as a consumer you can’t tell the difference in how it was made.

On this front, JetBrains introduced an experimental feature called Swift Export, which became available by default in Kotlin 2.2.20 (released September 2025). It lets Kotlin generate pure Swift interfaces directly, skipping the Objective-C middle layer. The result: the APIs Swift developers see look more “native,” with naming conventions that feel natural in Swift.
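
If you want to try it, the setup is a small Gradle block. A sketch based on the experimental DSL (module name invented; details may change between Kotlin versions):

```kotlin
// build.gradle.kts of the shared module
kotlin {
    swiftExport {
        moduleName = "Shared"
        // Optionally collapse a package prefix out of the generated
        // Swift names:
        // flattenPackage = "com.example.shared"
    }
}
```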

Runtime: The Last Mile from Swift to Kotlin

The compilation phase ends above. What follows happens at runtime.

When Swift code hits a line that calls a Kotlin method, the call is dispatched through the Objective-C ABI (Application Binary Interface). In plain terms, it uses Objective-C’s message-sending mechanism to route the call to the machine code generated by Kotlin/Native.

The call chain looks like this:

Swift code -> Objective-C ABI (message dispatch) -> Kotlin/Native machine code

Data types get converted along the way. Kotlin’s String bridges to Objective-C’s NSString, List bridges to NSArray. Kotlin/Native has a built-in type mapping table to handle these conversions.
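
A small sketch of what that table does in practice (names invented):

```kotlin
// commonMain: each return type crosses the boundary via the mapping table.
class FeedRepository {
    fun headlines(): List<String> =         // List<String> -> NSArray of NSString
        listOf("Launch day", "Patch notes")

    fun unreadCounts(): Map<String, Int> =  // Map -> NSDictionary
        mapOf("inbox" to 3)
}

// From Swift:
//   repo.headlines()     // seen as [String]
//   repo.unreadCounts()  // seen as [String: KotlinInt] (generic numbers
//                        // arrive boxed, a common first-week surprise)
```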

Kotlin/Native’s memory management is worth mentioning too. It has its own tracing garbage collector (GC), similar in spirit to the JVM’s, and it’s integrated with Swift/Objective-C’s ARC (Automatic Reference Counting). Typical cross-language references are collected correctly, but reference cycles that span the boundary can’t be traced by either side and will leak, and in practice you should also watch performance when frequently passing large objects (like UIImage) across the boundary.

The Full Pipeline in One Pass

Stringing everything together, the complete flow goes like this:

  1. You write a SharedViewModel.kt
  2. K2 compiler frontend reads the code, checks types, outputs Kotlin IR
  3. Kotlin/Native backend translates IR to LLVM IR and generates Objective-C headers
  4. LLVM backend optimizes and generates ARM64 object files
  5. Apple linker links object files with system libraries into a .framework
  6. Xcode imports the framework; Swift’s Clang Importer reads the Objective-C headers
  7. At runtime, Swift calls dispatch through Objective-C ABI to Kotlin/Native machine code

Seven steps. About the same level of complexity as going from a coffee bean to the latte in your hand.

Why Any of This Matters

You might ask: I’m building with KMP, I just write code — why should I care about the internals?

Here are a few scenarios you’ll run into sooner or later.

When debugging iOS-only crashes. Some crashes only happen on iOS while Android works perfectly fine. If you know the call chain is Swift -> ObjC ABI -> Kotlin/Native, you know where to start looking — it’s likely a type bridging or threading model issue.

When designing cross-platform APIs. The interfaces you define in Kotlin will ultimately be translated into Objective-C classes and methods. If you use Kotlin’s sealed classes, inline classes, or suspend functions from coroutines, the translation to the Objective-C side can get awkward. Understanding this translation process helps you design APIs that are more iOS-friendly.
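
Here’s a sketch of the trap (names invented): a perfectly idiomatic Kotlin API whose Objective-C translation loses most of what made it nice:

```kotlin
// commonMain: idiomatic Kotlin, awkward Objective-C.
sealed class LoadResult {
    data class Success(val items: List<String>) : LoadResult()
    data class Failure(val message: String) : LoadResult()
}

interface FeedApi {
    // Objective-C has no suspend: this surfaces to Swift as a
    // completion-handler method (and as async via Swift's translation).
    suspend fun load(): LoadResult
}

// On the Objective-C side, LoadResult is an ordinary class hierarchy.
// Swift can switch over it, but not exhaustively: a default branch is
// always required, so a newly added subclass fails silently.
```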

There’s also the question of build speed. KMP iOS builds are typically slower than Android. Where’s the bottleneck? The Kotlin/Native to LLVM IR translation stage, the LLVM optimization stage, or the linking stage? Knowing the pipeline structure lets you target the right spot. Google contributed the LLVM 16 upgrade and a more efficient GC implementation in 2025, optimizing specific segments of this pipeline.

Who’s Running KMP in Production

If you’re still on the fence about whether KMP is mature enough, consider these names: Google Docs on iOS runs KMP in production, Duolingo ships to over 40 million users weekly with KMP code included, and AWS uses KMP in their SDKs for cross-platform sharing.

These aren’t hobby experiments. They’re production systems where outages trigger alerts.

Wrapping Up

KMP on iOS feels like “magic” because the pipeline buries the complexity deep. But anyone who ships code knows there’s no magic in production — only systems you either understand or don’t.

This pipeline runs from the compiler frontend through Kotlin/Native backend, LLVM, the linker, Clang Importer, and Objective-C ABI. Every station can break, and every station can be optimized.

Once you’ve mapped out these stages, the next time you hit a KMP issue on iOS, you’ll at least know which layer to check the logs in.


FAQ

Is there a performance gap between KMP-compiled iOS output and native Swift?

Kotlin/Native generates ARM64 machine code through LLVM that runs directly on the CPU — the same tier as what Swift compiles down to. The real performance differences show up in data conversions at the cross-language boundary, like frequently converting between Kotlin’s String and Objective-C’s NSString. For everyday business logic computation, there’s no perceptible difference.

Why doesn’t KMP generate Swift code directly instead of going through Objective-C?

Historical reasons. In the iOS ecosystem, the Objective-C runtime is the foundational infrastructure underneath everything — even Swift calls system frameworks through the Objective-C ABI. Going through Objective-C is the most compatible path. The good news: Swift Export, available by default since Kotlin 2.2.20, is progressively adding support for generating Swift interfaces directly, which reduces the naming-style mismatches that come from the Objective-C middle layer.

KMP iOS builds are much slower than Android builds. Can they be improved?

Several angles: enable Gradle build caching and incremental compilation; use the linkDebugFramework* tasks (e.g. linkDebugFrameworkIosArm64) for development builds, since debug mode skips most LLVM optimization and links much faster than release; and minimize code in iosMain, keeping pure logic in commonMain. Google contributed the LLVM 16 upgrade and a more efficient GC in 2025, and iOS build speeds on Kotlin 2.1+ are noticeably better than before. A minimal caching setup is sketched below.
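
A sketch of the caching side (note that Kotlin/Native-specific flags such as kotlin.incremental.native=true live in gradle.properties rather than in this file):

```kotlin
// settings.gradle.kts: enable the local Gradle build cache so repeated
// Kotlin/Native compilations can reuse prior outputs.
buildCache {
    local {
        isEnabled = true
    }
}
```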