println! on a GPU? Sounds ridiculous.

VectorWare just did something wild: they got Rust’s standard library running on GPUs. You can now println!, read and write files, and get the system time inside an NVIDIA GPU kernel. GPU programming used to feel like roughing it in the wilderness. Now it has electricity.

What happened?

Rust’s standard library has three layers: core needs neither a heap nor an OS, alloc adds heap allocation, and std adds everything that depends on an operating system, like files, networking, threads, and clocks. A GPU is just a chip designed for number crunching. It doesn’t understand file systems or networking. So GPU code used to be stuck with #![no_std], living in just the core and alloc layers.
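
For a feel of that old world, here’s a minimal no_std-style sketch. It’s a library-style snippet: the allocator and panic-handler plumbing a real GPU crate would need is omitted, and the scale function is just a stand-in for "pure computation".

#![no_std]

extern crate alloc; // heap allocation is available, OS functionality is not
use alloc::vec::Vec;

// Pure computation only: no println!, no files, no system time.
fn scale(values: &[f32], factor: f32) -> Vec<f32> {
    values.iter().map(|v| v * factor).collect()
}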

VectorWare came up with something called hostcall.

Think of it as a delivery service between GPU and CPU. When the GPU wants to open a file but can’t do it itself, it sends a request to the CPU: “Hey, open this path for me.” The CPU uses the OS API to handle it and sends back the result.
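
VectorWare hasn’t published the protocol details, so the following is only a mental model. The type names and request variants are made up for illustration; the one thing taken from the post is the idea that the GPU serializes a request, the CPU performs the OS call, and the result comes back.

use std::time::{SystemTime, UNIX_EPOCH};

// Hypothetical request/reply types; a GPU thread would serialize a request
// into shared memory, then wait until the CPU posts the matching reply.
enum HostRequest {
    CreateFile { path: String, contents: Vec<u8> },
    SystemTimeNow,
}

enum HostReply {
    FileWritten,
    UnixSecs(u64),
}

// CPU-side handler: perform the real OS call on behalf of the GPU.
fn handle(req: HostRequest) -> std::io::Result<HostReply> {
    match req {
        HostRequest::CreateFile { path, contents } => {
            std::fs::write(path, contents)?;
            Ok(HostReply::FileWritten)
        }
        HostRequest::SystemTimeNow => {
            let secs = SystemTime::now()
                .duration_since(UNIX_EPOCH)
                .expect("clock set before 1970")
                .as_secs();
            Ok(HostReply::UnixSecs(secs))
        }
    }
}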

Some things the GPU can handle on its own, no need to bother the CPU. For example, std::time::Instant can use the GPU’s hardware timer directly. But std::time::SystemTime needs to know real-world time, which the GPU doesn’t have, so it asks the CPU.
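
In ordinary code the split is invisible; which of these two calls stays on the device and which crosses over to the CPU is exactly the distinction the post describes:

use std::time::{Instant, SystemTime, UNIX_EPOCH};

fn timing_inside_a_kernel() {
    // Instant measures monotonic elapsed time, which the GPU's own timer can provide.
    let start = Instant::now();
    let mut acc: u64 = 0;
    for i in 0..1_000_000u64 {
        acc = acc.wrapping_add(i);
    }
    let elapsed = start.elapsed();

    // SystemTime is wall-clock time, which only the host knows,
    // so this goes through a hostcall to the CPU.
    let unix_secs = SystemTime::now()
        .duration_since(UNIX_EPOCH)
        .unwrap()
        .as_secs();

    println!("sum={acc}, took {elapsed:?}, unix time {unix_secs}");
}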

They’re even exploring wilder ideas: writing files to /gpu/tmp to keep them on the GPU without syncing to CPU, or using network requests to localhost:42 for inter-thread communication on the GPU.
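
Neither of these exists yet. The point is that, if they ever land, kernel code would presumably just call the ordinary std APIs; the path and port below are simply the post’s own examples, everything else is a guess at how it might look.

use std::io::Write;
use std::net::TcpStream;

fn exploratory_ideas() -> std::io::Result<()> {
    // A path under /gpu/tmp could mean "keep this file in GPU memory,
    // don't sync it back to the CPU".
    std::fs::write("/gpu/tmp/scratch.bin", b"device-local scratch data")?;

    // A loopback connection repurposed as a channel between GPU threads.
    let mut channel = TcpStream::connect("127.0.0.1:42")?;
    channel.write_all(b"message to another GPU thread")?;
    Ok(())
}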

Here’s what the code looks like:

use std::io::Write;
use std::time::{SystemTime, UNIX_EPOCH};

#[unsafe(no_mangle)]
pub extern "gpu-kernel" fn kernel_main() {
    println!("Hello from VectorWare and the GPU!");

    let now = SystemTime::now();
    let duration_since_epoch = now.duration_since(UNIX_EPOCH).unwrap();
    println!("Current time: {}", duration_since_epoch.as_secs());

    std::fs::File::create("rust_from_gpu.txt")
        .unwrap()
        .write_all(b"This file was created from a GPU!")
        .unwrap();
}

Apart from that extern "gpu-kernel" marker, it looks almost identical to regular Rust code.

The implication: libraries on crates.io that use Rust’s standard library can theoretically run on GPUs now. No need to check if they’re #![no_std] compatible.

It’s still early days. Only Linux with an NVIDIA GPU and CUDA is supported. Every hostcall is a GPU-to-CPU round trip, so it adds communication overhead. Not all std features work yet. The VectorWare team includes Rust compiler contributors, and they plan to upstream these changes.

My first reaction: debugging finally doesn’t have to feel like stumbling in the dark. GPU code used to break and you’d just guess what went wrong. Now you can print things out and see. That alone makes this worth the effort.

The GitHub repo should be open-sourced soon. Keep an eye on it if you’re interested.