Deploy Alice ML to Europe — Concurrent Functional Programming from Saarbrücken in 2026
There is a peculiar problem with functional programming and concurrency: most functional languages handle one brilliantly and bolt on the other as an afterthought.
Haskell ships async and STM as libraries. OCaml added effect handlers and the Eio library well after the language's core was established. Standard ML's concurrency story — CML, MLton's thread libraries — lives entirely outside the language specification.
Alice ML took a different path.
Built at the Universität des Saarlandes in Saarbrücken, Germany, Alice ML extends Standard ML with first-class futures, lazy evaluation, and dataflow-based concurrency as language-level constructs. Not libraries. Not runtime options. Syntax.
The result is a language where spawn and lazy are keywords, futures are transparent typed values, and concurrent programs compose with the same type-safety guarantees as sequential ones. And the research group that built it — including Andreas Rossberg, who later co-designed WebAssembly at Google — was doing it in Saarbrücken, one of Europe's most productive academic computing centres, at the same institution where Gert Smolka was developing Oz.
This guide shows how to deploy Alice ML backends to EU infrastructure with sota.io.
The Saarbrücken Research Context
Alice ML did not emerge in isolation. It was born from a collision of two research traditions at the same German university — the Standard ML world and the Oz/Mozart world — and the result reflected both.
The Standard ML connection. Standard ML was designed at Edinburgh (Robin Milner, Mads Tofte) and refined over decades into one of the most carefully specified languages in existence. The SML Definition (1990, revised 1997) is a formally grounded standard. Standard ML of New Jersey (SML/NJ) and Poly/ML demonstrate its industrial applicability. The SML type system — Hindley-Milner inference, parametric polymorphism, the module system with functors and signatures — remains a model of principled language design.
The Oz/Mozart connection. At the same time, Gert Smolka at Saarbrücken and his collaborators at UCLouvain and the Swedish Institute of Computer Science (SICS) were developing Oz and its implementation Mozart — a language built around dataflow concurrency, where variables are logic variables that synchronise automatically when bound. (See our post on Oz/Mozart.) The Alice ML group had direct access to this work.
Andreas Rossberg 🇩🇪 (Universität des Saarlandes → MPI-SWS → Google) — the primary designer of Alice ML's type system. Rossberg did his doctoral and postdoctoral work at Saarbrücken, where he was a driving force behind Alice ML and the Seam virtual machine, and continued programming-language research at MPI-SWS. He then moved to Google's V8 team, where he became one of the principal co-designers of WebAssembly — the low-level bytecode format that now runs in every browser. Rossberg's trajectory — from Saarbrücken to WebAssembly — is one of the cleaner examples of European PL research directly influencing global computing infrastructure.
Leif Kornstaedt — Alice ML's primary runtime engineer. Kornstaedt designed the Seam virtual machine, Alice ML's own implementation substrate that replaced the earlier Mozart-based backend. The Seam VM handles Alice ML's concurrent execution model, native threads, and pickling (serialisation of runtime values including closures).
Thorsten Brunklaus — contributed to Alice ML's implementation, particularly the Mozart/Oz interoperability layer during the language's early phases.
The Smolka group. Alice ML emerged from the research group surrounding Gert Smolka at Saarbrücken — the same group responsible for Oz and Mozart. The influence is explicit: Alice ML's concurrency model (futures, promises, dataflow synchronisation) is directly inspired by Oz's single-assignment variables and logic programming semantics, transplanted into the SML type system.
What Alice ML Adds to Standard ML
Alice ML is Standard ML plus extensions that no SML standard provides; the three most important are first-class futures, lazy evaluation built into the language, and distributed programming via pickling.
First-Class Futures
A future in Alice ML is a placeholder for a value that has not yet been computed. Futures are typed — they are not untyped promises or Option types — and they compose transparently:
(* Spawn a concurrent computation — returns a future immediately *)
val result : int = spawn computeExpensiveResult ()
(* Use the future as if it were an int — synchronises automatically when needed *)
val doubled = 2 * result (* blocks here only if result not yet available *)
The spawn keyword launches a computation in a concurrent thread and returns a typed future. The future is transparent: you can pass it to functions, store it in data structures, and use it in expressions. Synchronisation happens automatically when the program actually needs the value — no explicit .await, no callback, no join.
This is not a library. Futures are part of the Alice ML language definition: a future has the same type as the value it stands for, and every strict operation synchronises on its operands automatically.
(* Futures compose type-safely *)
fun fetchUser (id : int) : user = spawn getUserFromDb id
fun fetchPosts (userId : int) : post list = spawn getPostsByUser userId
val user : user = fetchUser 42
val posts : post list = fetchPosts 42
(* Both computations run concurrently; results are used when available *)
val page = { user = user, posts = posts }
Because a future has the type of its eventual value, the type system guarantees that a future of type int can only ever be determined by an int — there are no null-equivalent runtime errors from unresolved futures; strict uses simply synchronise.
Lazy Evaluation
Alice ML adds a lazy keyword, enabling demand-driven evaluation within an otherwise strict (eager) language:
(* Lazy values are only computed when first forced *)
val expensiveConfig : config = lazy loadAndParseConfig ()
(* Force evaluation — computed once, cached thereafter *)
val cfg : config = Future.await expensiveConfig
Lazy values are transparent and first-class: a lazy expression has the type of the value it will produce, and it can be stored in lists, records, and closures like any other value. Lazy values are also thread-safe — if multiple threads force the same lazy value simultaneously, only one computation runs; the others wait and receive the cached result.
Crucially, lazy and future interact. A lazy value that is forced in a concurrent context behaves as a future during evaluation — the forcing thread proceeds, other threads that try to force the same value synchronise on the result automatically.
By-Need Futures
The combination of laziness and futures gives Alice ML by-need futures — values computed concurrently, but only once they are first demanded:
(* Future.byneed: evaluated concurrently, but only when first demanded *)
val results : int list = List.map (fn x => Future.byneed (fn () => heavyCompute x)) input
Future.byneed registers a computation that runs in its own thread, but the computation does not start until the value is first demanded. This is speculative prefetching without explicit scheduling logic — the runtime handles demand-driven concurrency automatically.
Pickling — Portable Closures
Alice ML supports pickling: serialising arbitrary runtime values — including closures and futures — to bytes that can be transmitted across a network or stored to disk. This is not object serialisation (which only handles data). It is code serialisation:
(* Pickle a function to a file — values travel inside first-class packages *)
val f : int -> int = fn x => x * 2
val p = pack (val transform = f) : (val transform : int -> int)
Pickle.save ("transform.alc", p)
(* Load it back — including the code, not just data *)
structure T = unpack Pickle.load "transform.alc" : (val transform : int -> int)
val result = T.transform 21 (* 42 *)
For distributed systems, pickling means you can send computation to data rather than always moving data to computation. A closure serialised on one node can be transmitted and executed on another — the receiver does not need source code, only the Alice ML runtime.
Building a Backend Service with Alice ML
Alice ML's HTTP story uses the Socket library for TCP and HttpServer for HTTP. Here is a minimal REST-style service:
Project structure:
my-alice-service/
├── server.aml # Main service
├── handlers.aml # Request handlers
├── Dockerfile
└── .gitignore
handlers.aml:
import structure Json from "x-alice:/lib/json/Json"
structure Handlers =
struct
(* Health endpoint *)
fun health () =
Json.object [
("status", Json.string "ok"),
("lang", Json.string "alice-ml"),
("version", Json.string "1.4")
]
(* Echo endpoint — concurrent processing *)
fun processRequest (body : string) : string =
spawn (
let
val parsed = Json.fromString body
val message = Json.toString (Json.lookup parsed "message")
val upper = String.map Char.toUpper message
in
Json.toString (Json.object [
("echo", Json.string upper),
("processed", Json.bool true)
])
end
)
end
server.aml:
import structure Socket from "x-alice:/lib/system/Socket"
import structure Handlers from "handlers"
(* Concurrent request handler — each request in its own future *)
fun handleConnection sock =
let
val request = Socket.inputLine sock
val response = Handlers.processRequest request
(* Response is a future — synchronises when handler completes *)
val body = Future.await response
in
Socket.output (sock, "HTTP/1.1 200 OK\r\n");
Socket.output (sock, "Content-Type: application/json\r\n\r\n");
Socket.output (sock, body);
Socket.close sock
end
(* Accept loop — each connection spawned concurrently *)
fun acceptLoop server =
let
val client = Socket.accept server
val _ = spawn handleConnection client
in
acceptLoop server
end
val server = Socket.server { port = 8080 }
val _ = acceptLoop server
Each connection is handled in a concurrent future — spawn handleConnection client returns immediately, and the accept loop continues. No thread pool configuration. No async framework. Concurrency is a language primitive.
Dockerfile:
FROM debian:bookworm-slim AS base
# Alice ML via Mozart/Seam runtime
RUN apt-get update && apt-get install -y \
wget \
libgmp-dev \
&& rm -rf /var/lib/apt/lists/*
# Download Alice ML 1.4 distribution
RUN wget -q https://www.ps.uni-saarland.de/alice/download/alice-1.4-x86-linux.tar.gz \
&& tar -xzf alice-1.4-x86-linux.tar.gz -C /opt \
&& rm alice-1.4-x86-linux.tar.gz
ENV PATH=/opt/alice-1.4/bin:$PATH
ENV ALICE_HOME=/opt/alice-1.4
WORKDIR /app
COPY . .
# Compile to bytecode
RUN alicec server.aml -o server.alc
EXPOSE 8080
CMD ["alicerun", "server.alc"]
Deploy on sota.io:
# Install sota CLI
curl -fsSL https://sota.io/install.sh | sh
# Deploy
sota deploy --port 8080
# Environment variables
sota env set DATABASE_URL=postgresql://...
# Custom domain
sota domains add alice.yourdomain.eu
Connecting to PostgreSQL from Alice ML
Alice ML's Sql library provides parameterised queries with SML type safety:
import structure Sql from "x-alice:/lib/sql/Sql"
(* Connect — DATABASE_URL from environment *)
val db = Sql.connect (valOf (OS.Process.getEnv "DATABASE_URL"))
(* Parameterised query — concurrent fetch *)
fun getUserById (id : int) : user option =
spawn
let
val stmt = Sql.prepare db "SELECT id, email, name FROM users WHERE id = $1"
val result = Sql.execute stmt [Sql.int id]
in
case Sql.fetchOne result of
NONE => NONE
| SOME row =>
SOME {
id = Sql.getInt (row, 0),
email = Sql.getString (row, 1),
name = Sql.getString (row, 2)
}
end
(* Concurrent batch fetch — all queries run in parallel *)
fun getUsersBatch (ids : int list) : user option list =
let
val futures = List.map getUserById ids
in
spawn (List.map Future.await futures)
end
getUsersBatch fires all database queries concurrently and collects results. The parallelism is automatic — each getUserById returns a future, and List.map Future.await synchronises on all of them only when the batch result is needed.
Why EU Teams Use Alice ML
Type-safe concurrency without a framework. Alice ML's futures are typed: a future of type int can only ever be determined by an int, and concurrent code is checked by the same Hindley-Milner discipline as sequential code. This eliminates a class of concurrency bugs (resolving a future to a value of the wrong type, losing track of which values are concurrent) that are routine in Promise-based JavaScript or Go's implicit goroutine returns. For EU fintech or public sector services where concurrency correctness matters, this is a real property.
Standard ML's module system. Alice ML inherits SML's module system — functors, signatures, and structures — one of the most expressive module systems in any widely implemented language. Large EU codebase organisation (separating interface from implementation, explicit dependency boundaries, parameterised modules) is handled cleanly without build-system workarounds.
Oz-heritage dataflow semantics. Alice ML's concurrency model is descended from Oz's single-assignment variables and dataflow synchronisation (see the Oz/Mozart post). Where Oz uses the constraint store, Alice ML uses futures and lazy values — a typed, functional version of the same idea. Teams familiar with Oz will find Alice ML's concurrency model immediately recognisable.
Standard ML cross-compatibility. Alice ML is a conservative extension of Standard ML — almost all valid SML code is valid Alice ML (code that uses the new keywords such as spawn or lazy as identifiers is the main exception). In practice this means SML libraries and codebases can be reused: functors, signatures, and structures written for MLton, Poly/ML, or SML/NJ typically work in Alice ML with little or no modification.
Saarbrücken research provenance. Alice ML comes from Universität des Saarlandes, home of DFKI (Deutsches Forschungszentrum für Künstliche Intelligenz) and one of Europe's densest concentrations of programming language, AI, and formal methods research. The same institution produced Gert Smolka's Oz, Andreas Rossberg's WebAssembly co-design, and a series of foundational results in type theory and program verification. For EU organisations evaluating academic-origin languages, Saarbrücken provenance is a credible signal.
Pickling for distributed EU deployments. Alice ML's serialisation of closures and futures enables computation migration patterns relevant to EU data-residency requirements: you can ship computation to EU-resident data rather than the reverse. For EU healthcare or financial workloads with strict data-locality rules, this architectural option is meaningful.
Practical Considerations
Research-origin maturity. Alice ML is a research language — the last stable release is 1.4. The codebase is well-implemented and the Seam VM is production-capable, but ecosystem tooling (package management, IDE integration, community support) reflects research-lab norms rather than industrially maintained projects. Evaluate your operational requirements before deploying in high-traffic contexts.
JVM alternative. Alice ML's Seam VM is a custom virtual machine implemented in C++. It is not JVM-based. For teams that prefer JVM deployment (containerisation, monitoring, existing ops tooling), Scala, Clojure, or Kotlin offer functional programming with JVM familiarity.
WebAssembly connection. Andreas Rossberg's work on WebAssembly is in production in every browser. WebAssembly's rigorously specified, formally typed design reflects the PL-theory background Rossberg built on projects like Alice ML, though WebAssembly's type system is deliberately much simpler than Alice ML's Hindley-Milner-based one. Teams interested in the theory behind WebAssembly will find Alice ML an instructive study.
When Alice ML is the right choice:
- Concurrent backend services where type safety of async values matters
- Distributed systems requiring computation migration (pickling)
- EU academic environments teaching concurrent functional programming
- Teams with SML familiarity who need Oz-style concurrency without leaving the ML type system
- Research prototypes in language-theoretic correctness or distributed systems
Deploy on EU Infrastructure
sota.io runs on servers in Germany (Frankfurt) and other EU data centres. All data stays within EU jurisdiction, GDPR compliance is structural, and every deployment is isolated by default.
Get started:
# Install sota CLI
curl -fsSL https://sota.io/install.sh | sh
# Log in
sota login
# Deploy from Dockerfile
sota deploy --port 8080
# Custom domain
sota domains add alice.yourdomain.eu
Type-safe. Concurrent by design. Hosted in Europe.
European Connections Summary
| Who | Institution | Contribution |
|---|---|---|
| Andreas Rossberg 🇩🇪 | Universität des Saarlandes → MPI-SWS → Google V8 | Alice ML designer, WebAssembly co-designer |
| Leif Kornstaedt | Universität des Saarlandes | Seam VM architect, Alice ML runtime |
| Thorsten Brunklaus | Universität des Saarlandes | Alice ML implementation, Mozart interop |
| Gert Smolka 🇩🇪 | Universität des Saarlandes | Oz/Mozart — Alice ML concurrency heritage |
| Universität des Saarlandes 🇩🇪 | Saarbrücken | Research home of Alice ML, Oz, DFKI vicinity |
| MPI-SWS 🇩🇪 | Saarbrücken & Kaiserslautern | PL theory research, Rossberg's affiliation before Google |
| European Research Council | EU | Research funding context for Saarbrücken PL group |
Alice ML is the product of Saarbrücken's collision of two traditions: Robin Milner's Standard ML type system and Gert Smolka's Oz concurrency model. A central figure in that collision — Andreas Rossberg — went on to co-design WebAssembly. The foundation is EU-built, the type theory came via Edinburgh and Saarbrücken, and the concurrency model via Saarbrücken, Louvain-la-Neuve, and Stockholm. All of it runs on sota.io in Frankfurt.
sota.io is EU-native infrastructure for backend services. Deploy your Alice ML application to German servers in minutes — GDPR-compliant, managed PostgreSQL, custom domains included.