
Module system primitives

The Nix module system implements a sophisticated configuration composition mechanism that can be understood through its core algebraic primitives. This document provides three-tier explanations (intuitive, computational, and formal) for each primitive to illuminate both practical usage and mathematical structure.

Understanding these primitives is essential for working with Nix at scale because they explain why the module system supports complex patterns like conditional imports, priority overrides, and recursive submodules while maintaining predictable semantics. The algebraic foundations are not merely theoretical—they are the reason NixOS configurations compose reliably.

This document provides the foundational understanding of Nix module system primitives.

Source reference: nixpkgs lib/types.nix:1138-1180

A deferred module is a module definition that hasn’t been evaluated yet because it needs to wait for the final configuration to be computed. Think of it as a recipe that says “once you know what the final meal looks like, I’ll tell you what ingredients I need.” This is the fundamental mechanism that allows modules to reference the final configuration value without creating infinite loops.

The most common form is a function taking { config, ... } arguments, where config refers to the fully-merged configuration after all modules have been combined. This enables conditional logic like “if some other option is enabled, then enable this feature too” without the evaluator getting stuck in circular dependencies.

When you write a module as a function:

{ config, pkgs, ... }: {
  services.nginx.enable = config.services.web.enable;
}

This function IS called immediately during the collection phase. The module system:

  1. Collects all modules by calling their functions with args including a lazy config reference
  2. Normalizes returned attrsets to canonical form { _file, key, imports, options, config }
  3. Computes a fixpoint where the lazy config reference resolves to the merged result

The “deferral” is in the lazy evaluation of config values within the returned attrsets, not in suspending function calls.
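The eager-call/lazy-value split can be made concrete with a small Python model (a toy sketch, not nixpkgs code; the `Thunk` class and dictionary shapes are illustrative):

```python
class Thunk:
    """A suspended computation, standing in for a lazy Nix value."""
    def __init__(self, fn):
        self.fn, self.forced, self.value = fn, False, None
    def force(self):
        if not self.forced:
            self.value, self.forced = self.fn(), True
        return self.value

calls = []

def module(config):  # plays the role of { config, ... }: { ... }
    calls.append("called")  # record that the function body ran
    # the returned "attrset" holds an unevaluated reference into config
    return {"services.nginx.enable": Thunk(lambda: config["services.web.enable"])}

config = {}                  # the merged result, filled in "later"
collected = module(config)   # collection phase: the call happens NOW
assert calls == ["called"]   # the function itself was not deferred
config["services.web.enable"] = True  # the fixpoint result becomes available
assert collected["services.nginx.enable"].force() is True  # thunk resolves on demand
```

The function runs immediately, but nothing inside it observes `config` until the thunk is forced—exactly the distinction drawn above.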

The type implementation shows this clearly:

deferredModuleWith = { staticModules ? [] }: mkOptionType {
  name = "deferredModule";
  check = x: isAttrs x || isFunction x || path.check x;
  merge = loc: defs: {
    imports = staticModules ++ map (def:
      lib.setDefaultModuleLocation "${def.file}, via option ${showOption loc}" def.value
    ) defs;
  };
  # ...
};

The merge function doesn’t evaluate the modules—it just collects them into an imports list. The actual evaluation happens later in evalModules via the fixpoint computation.

Type vs value: deferredModule is an option type (specifically types.deferredModuleWith), not a value. When you write flake.modules.homeManager.foo = { ... }: { ... }, you’re assigning a value of type deferredModule to an option. The type’s merge function defines how multiple such values compose.
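The collect-don't-evaluate behavior of the merge function can be sketched in Python (an illustrative model, not nixpkgs code; names are hypothetical):

```python
def deferred_module_merge(loc, defs, static_modules=()):
    """Collect definitions into a single imports list without evaluating
    them, mirroring deferredModuleWith's merge above."""
    return {"imports": list(static_modules) + [d["value"] for d in defs]}

merged = deferred_module_merge(
    ["flake", "modules", "homeManager", "foo"],
    [{"file": "a.nix", "value": "<module A>"},
     {"file": "b.nix", "value": "<module B>"}],
)
assert merged == {"imports": ["<module A>", "<module B>"]}
# monoid identity: merging no definitions yields the empty imports list
assert deferred_module_merge([], []) == {"imports": []}
```

Note that the module values pass through untouched—composition here is pure list concatenation, which is why deferredModule values form a monoid.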

What “deferred” means (and doesn’t mean)


The term “deferred” in module system context is frequently misunderstood:

NOT deferred (happens immediately during collection):

  • Module function calls — applyModuleArgs invokes functions eagerly
  • Import resolution — paths are imported and processed during collection
  • Structural normalization — modules converted to canonical form

IS deferred (resolved later):

  • Config value evaluation — thunks referencing config resolve on demand during fixpoint computation
  • deferredModule-typed option consumption — values stored in options like flake.modules.* are only evaluated when a consumer imports them into their own evalModules call

The circularity in { config, ... }: { foo = config.bar; } works NOT because the function call is deferred, but because config is a lazy reference to the fixpoint result. The function executes immediately and returns an attrset containing an unevaluated thunk (config.bar).

A deferred module is a morphism in a Kleisli category over a reader-like effect that provides access to the final configuration.

$$\text{deferredModule} : \text{Env} \to \text{Module}$$

where $\text{Env}$ is the environment containing the fixpoint configuration $c : \text{Config}$, and $\text{Module}$ is a structure defining options and configuration values.

More precisely, modules form a category $\mathbf{Mod}$ where:

  • Objects are module interfaces (sets of option declarations)
  • Morphisms are module implementations (functions from configurations to definitions)

The deferred module type embeds the Kleisli category $\mathbf{Kl}(T)$ for the reader monad $T(A) = \text{Config} \to A$ into $\mathbf{Mod}$.

A module expression of the form:

$$m : \text{Config} \to \{ \text{options} : \text{Options},\ \text{config} : \text{Definitions} \}$$

awaits the fixpoint-resolved configuration as its function argument:

$$\text{config}_{\text{final}} = \mu c.\, \bigsqcup_{i} m_i(c)$$

where $\mu$ denotes the least fixpoint and $\sqcup$ is the join operation in the configuration lattice.

The module system orchestrates two complementary algebraic structures:

  1. Type-level monoid (module collection): deferredModule values form a monoid under concatenation of imports lists—identity is the empty list [], operation is list concatenation ++, and composition happens before fixpoint computation
  2. Semantic-level join-semilattice (configuration merging): merged configuration values form a join-semilattice with type-specific merge operations, computed after the fixpoint resolves cross-module references

The transition from monoid (module collection) to semilattice (configuration merging) happens via evalModules fixpoint computation. Deferred modules enable a traced monoidal category structure where the trace operation implements the fixpoint that ties the configuration back to itself.

The Kleisli category characterization directly corresponds to everyday Nix module syntax for the reader-like operations:

Reader Operation | Module System Primitive | Nix Manifestation
---------------- | ----------------------- | ------------------------------------------
ask              | Config access           | { config, ... }: config.foo.bar
fmap             | Option transformation   | Defining values in terms of other options

The reader monad captures deferred access to configuration: modules are functions awaiting the final config value. However, the fixpoint computation that provides that config value requires a different categorical framework.

Fixpoint as trace: The equation config = F(config) is a trace operation in a traced monoidal category, not a Kleisli operation. In categorical terms, trace “ties the knot” on an internal state:

$$\text{Tr}_C : (A \otimes C \to B \otimes C) \to (A \to B)$$

For modules: $A$ is external input (pkgs, lib), $B$ is output configuration, and $C$ is the configuration being computed. The trace feeds $C$ back into itself, implementing the recursive config = F(config) binding.

These two frameworks work together:

  • Reader monad (Kleisli): explains why modules can reference config without explicit threading
  • Traced monoidal category: explains why the self-referential fixpoint is well-defined

Concretely:

{ config, lib, ... }: {
  options.paths.base = lib.mkOption { type = lib.types.str; };
  options.paths.processed = lib.mkOption { type = lib.types.str; };
  config.paths.processed = "${config.paths.base}/processed";
}

This seemingly circular reference works because it combines both structures: the module is a reader computation (accessing config), and evalModules applies the trace operation to resolve the fixpoint. The module doesn’t immediately evaluate config.paths.base—it constructs a function from config to definitions. When evalModules computes the fixpoint via demand-driven lazy evaluation, it ties the knot: the final config becomes the argument to all module functions, resolving config.paths.processed without explicit threading.

Note that while this demonstrates the fixpoint mechanism, paths.base must still be defined by another module or have a default value. The module system resolves the reference to the final merged value, but doesn’t create values from nothing. If paths.base is undefined, evaluation will fail with “The option `paths.base’ was accessed but has no value defined. Try setting the option.” This is not a limitation of the fixpoint—it’s the correct behavior: the module declares it needs a base path but doesn’t provide one.
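The knot-tying described above can be modeled in a few lines of Python (a toy sketch of evalModules, not the nixpkgs implementation; laziness is simulated with zero-argument thunks):

```python
def eval_modules(modules):
    """Toy evalModules: each module maps a getter for the *final* config
    to a dict of thunked definitions; forcing happens only at the end."""
    config = {}
    get = lambda name: config[name]()   # demand-driven lookup into the final config
    for m in modules:
        config.update(m(get))           # collect definitions (no forcing yet)
    return {k: v() for k, v in config.items()}  # tie the knot on demand

base      = lambda get: {"paths.base": lambda: "/data"}
processed = lambda get: {"paths.processed": lambda: get("paths.base") + "/processed"}

final = eval_modules([base, processed])
assert final == {"paths.base": "/data", "paths.processed": "/data/processed"}
```

Removing the `base` module makes `get("paths.base")` fail at force time—the same "accessed but has no value defined" behavior described above.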

Source reference: nixpkgs lib/modules.nix:84-367

evalModules is the function that takes a list of modules and produces a final configuration by:

  1. Recursively discovering all imported modules
  2. Collecting all option declarations
  3. Collecting all configuration definitions
  4. Computing a fixpoint where configuration values can reference the final merged result
  5. Merging all definitions according to option types and priorities
  6. Checking that all definitions match declared options

It’s the “main” function of the module system—the evaluator that turns a collection of module fragments into a coherent configuration.

The fixpoint computation is what enables powerful patterns like “enable this service if that service is enabled” without falling into infinite recursion, because all references to config are accessing the same final value.

The evaluation proceeds in phases:

Phase 1: Collection (collectModules)

collectModules class modulesPath (regularModules ++ [internalModule]) args

Recursively expands all imports, filters disabledModules, and produces a flat list of normalized modules.

Phase 2: Merging (mergeModules)

merged = mergeModules prefix (reverseList (doCollect {}).modules);

Traverses the option tree, matching definitions to declarations and recursing into submodules.

Phase 3: Fixpoint resolution

config = let
  declaredConfig = mapAttrsRecursiveCond (v: !isOption v) (_: v: v.value) options;
  freeformConfig = /* handle unmatched definitions */;
in if declaredConfig._module.freeformType == null
  then declaredConfig
  else recursiveUpdate freeformConfig declaredConfig;

The fixpoint is implicit in Nix’s lazy evaluation: when a module function is called with config, it receives a thunk that will eventually evaluate to the merged result—which may include values produced by that same function. Nix’s laziness ensures this only works if there are no strict cycles (e.g., a = a + 1 fails, but a = if b then x else y; b = someCondition works).

The key implementation detail:

config = addErrorContext
  "if you get an infinite recursion here, you probably reference `config` in `imports`..."
  config;

This shows config is a self-referential binding that relies on lazy evaluation to resolve.

evalModules computes the least fixpoint of a module configuration functor in a domain-theoretic framework via demand-driven lazy evaluation, not classical Kleene iteration.

Let $\mathcal{M}$ be the set of all modules, and define the configuration space $\mathcal{C}$ as a complete lattice of partial configurations ordered by information content (the Smyth order: $c_1 \sqsubseteq c_2$ iff $c_2$ extends $c_1$).

Each module $m_i$ defines a function:

$$F_{m_i} : \mathcal{C} \to \mathcal{C}$$

that takes a configuration and produces additional definitions. The combined module system defines:

$$F : \mathcal{C} \to \mathcal{C}, \quad F(c) = \bigsqcup_{i=1}^{n} F_{m_i}(c)$$

where $\sqcup$ is the join operation in the configuration lattice (merging definitions according to type-specific merge functions and priority ordering).

By the Knaster-Tarski theorem, since $F$ is monotone on the complete lattice $\mathcal{C}$, it has a unique least fixpoint:

$$\text{evalModules}(m_1, \ldots, m_n) = \mu F = \text{lfp}(F)$$

The classical Kleene characterization $\text{lfp}(F) = \bigsqcup_{k \geq 0} F^k(\bot)$ describes the mathematical object (where $\bot$ is the minimal configuration and $F^k$ denotes $k$ applications of $F$), but Nix does not compute this series directly—it uses demand-driven thunk evaluation instead, computing only the portions of the fixpoint actually demanded.

Lattice structure: The configuration lattice is the product of per-option lattices:

$$\mathcal{C} = \prod_{o \in \text{Options}} \mathcal{L}_o$$

where each option’s lattice $\mathcal{L}_o$ is determined by:

  • Primitive types (int, string): flat lattice (only $\bot$ and incomparable values)
  • mkMerge: forms join of sublattices
  • mkOverride: imposes priority ordering (values with priority $p$ dominate those with priority $p' > p$)
  • submodule: recursive fixpoint on nested configuration lattice

Convergence: Nix reaches the fixpoint without exhaustive iteration because:

  1. Lazy evaluation computes only demanded portions of the configuration space
  2. Each demand resolves more thunks, monotonically increasing defined values
  3. The demanded slice has finite height (no infinite ascending chains in practice)
  4. Stabilization happens per-demanded-value, not globally across the entire lattice

Unlike classical Kleene iteration (which computes $F^0(\bot), F^1(\bot), F^2(\bot), \ldots$ until global stabilization), Nix evaluates thunks on demand. The mathematical result is identical (the least fixpoint), but the computational path is fundamentally different.

Category theory perspective: The fixpoint computation implements a trace operation in a traced monoidal category. The module system forms a compact closed category where:

  • Objects are option sets (interfaces)
  • Morphisms are modules (implementations)
  • The trace of a morphism $f : A \otimes C \to B \otimes C$ is $\text{Tr}_C(f) : A \to B$, which “ties the knot” on the internal state $C$ (the configuration being computed)

The equation config = F(config) is precisely the trace operation that connects the output configuration back to the input, relying on domain-theoretic fixpoints for well-definedness.

Source references:

Option merging determines what happens when multiple modules define the same option. The module system provides several primitives to control this:

mkMerge: Explicitly combine multiple values. Without this, writing foo = [a]; foo = [b]; in the same module would be an error (duplicate attribute). mkMerge [a b] says “I’m intentionally providing multiple values to be merged.”

mkOverride (and its aliases mkDefault, mkForce): Attach a priority to a value. Lower numeric priorities win. This enables the pattern where modules can set sensible defaults (mkDefault) that users can override without conflicts.

mkIf: Conditionally include a value. Unlike a plain Nix if expression, mkIf conditions can reference the final configuration, and mkIf false values disappear entirely (don’t contribute to merging).

mkOrder (and mkBefore/mkAfter): Control the order of list elements when merging. Useful for ensuring certain items appear first or last in merged lists.

The merging process (mergeDefinitions) operates in stages:

Stage 1: Discharge properties

defsNormalized = concatMap (m:
  map (value: if value._type or null == "definition" then value else { inherit (m) file; inherit value; })
    (dischargeProperties m.value)
) defs;

This expands mkMerge (flattens nested merges) and evaluates mkIf conditions:

  • mkMerge [a b c] becomes [a, b, c]
  • mkIf true x becomes [x]
  • mkIf false x becomes []
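The flattening rules above can be modeled directly (a minimal Python sketch of dischargeProperties; the tagged-dict encoding is illustrative, not nixpkgs internals):

```python
def discharge(value):
    """Flatten mkMerge and evaluate mkIf, mirroring dischargeProperties:
    a value becomes a list of zero or more plain definitions."""
    t = value.get("_type") if isinstance(value, dict) else None
    if t == "merge":                       # mkMerge [a b c] -> a, b, c
        return [v for sub in value["contents"] for v in discharge(sub)]
    if t == "if":                          # mkIf cond x -> [x] or []
        return discharge(value["content"]) if value["condition"] else []
    return [value]

mk_merge = lambda xs: {"_type": "merge", "contents": xs}
mk_if    = lambda c, v: {"_type": "if", "condition": c, "content": v}

assert discharge(mk_merge([1, 2, 3])) == [1, 2, 3]
assert discharge(mk_if(True, 5)) == [5]
assert discharge(mk_if(False, 5)) == []
# properties nest, and mkIf false values vanish from nested merges too
assert discharge(mk_merge([mk_if(True, 1), mk_merge([2, mk_if(False, 3)])])) == [1, 2]
```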

Stage 2: Filter by priority

defsFiltered = filterOverrides' defsNormalized;

Examines all mkOverride priorities and keeps only definitions with the highest priority (lowest number):

getPrio = def:
  if def.value._type or "" == "override"
  then def.value.priority
  else defaultOverridePriority; # 100
highestPrio = foldl' (prio: def: min (getPrio def) prio) 9999 defs;
values = filter (def: getPrio def == highestPrio) defs;

Priority values (lower number = higher priority, “wins” in merge):

  • mkForce: 50 (force override)
  • No modifier: 100 (user value)
  • mkDefault: 1000 (module default)
  • mkOptionDefault: 1500 (option’s own default)

Stage 3: Sort by order

defsSorted =
  if any (def: def.value._type or "" == "order") defsFiltered.values
  then sortProperties defsFiltered.values
  else defsFiltered.values;

For list-valued options, mkBefore (priority 500) items appear before default (1000) before mkAfter (1500).
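The order-priority sort works the same way as the override filter, but sorting instead of discarding (again a Python sketch with illustrative names):

```python
DEFAULT_ORDER = 1000  # models the default order priority

def order_key(defn):
    v = defn["value"]
    if isinstance(v, dict) and v.get("_type") == "order":
        return v["priority"]
    return DEFAULT_ORDER

def sort_properties(defs):
    """Stable sort by order priority, then strip the order wrapper."""
    return [
        {**d, "value": d["value"]["content"]}
        if isinstance(d["value"], dict) and d["value"].get("_type") == "order"
        else d
        for d in sorted(defs, key=order_key)
    ]

mk_order  = lambda p, v: {"_type": "order", "priority": p, "content": v}
mk_before = lambda v: mk_order(500, v)
mk_after  = lambda v: mk_order(1500, v)

defs = [{"value": mk_after(["z"])}, {"value": ["m"]}, {"value": mk_before(["a"])}]
merged = [x for d in sort_properties(defs) for x in d["value"]]
assert merged == ["a", "m", "z"]  # mkBefore < default < mkAfter
```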

Stage 4: Type-specific merge

mergedValue =
  if type.merge ? v2
  then checkedAndMerged.value  # new v2 protocol
  else type.merge loc defsFinal;  # classic merge

Each type defines its own merge function:

  • Lists: concatenate
  • Attribute sets: recursive merge
  • Integers: must all be equal (or use mergeEqualOption)
  • Submodules: recursive evalModules
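The check-then-merge contract of the classic protocol can be sketched in Python (a toy model of mkOptionType; the helper names are hypothetical):

```python
def make_type(name, check, merge):
    """Minimal stand-in for mkOptionType: a type bundles a membership
    check with a merge function."""
    return {"name": name, "check": check, "merge": merge}

def merge_equal(loc, defs):
    # primitives: every definition must agree
    head = defs[0]["value"]
    if any(d["value"] != head for d in defs):
        raise ValueError(f"conflicting definitions of {'.'.join(loc)}")
    return head

def merge_concat(loc, defs):
    # lists: concatenate all definitions
    return [x for d in defs for x in d["value"]]

def merge_definitions(loc, typ, defs):
    """Check every definition against the type, then delegate to its merge
    (the classic, pre-v2 protocol)."""
    for d in defs:
        if not typ["check"](d["value"]):
            raise TypeError(f"{d['file']}: not of type {typ['name']}")
    return typ["merge"](loc, defs)

int_type  = make_type("int", lambda v: isinstance(v, int), merge_equal)
list_type = make_type("listOf int", lambda v: isinstance(v, list), merge_concat)

assert merge_definitions(["port"], int_type,
    [{"file": "a.nix", "value": 80}, {"file": "b.nix", "value": 80}]) == 80
assert merge_definitions(["pkgs"], list_type,
    [{"file": "a.nix", "value": [1]}, {"file": "b.nix", "value": [2]}]) == [1, 2]
```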

Formal characterization (semantic-level join-semilattice)


Option merging forms a join-semilattice with priority stratification—this is the semantic-level algebraic structure that operates after fixpoint computation resolves cross-module references.

Join-semilattice structure: For each option of type $\tau$, the set of possible merged values $\mathcal{M}_\tau$ forms a join-semilattice $(\mathcal{L}_\tau, \sqcup)$ where:

$$\text{merge}([d_1, d_2, \ldots, d_n]) = d_1 \sqcup d_2 \sqcup \cdots \sqcup d_n$$

The join operation $\sqcup$ is type-dependent:

Lists: $\mathcal{L}_{\text{listOf}\, \alpha} = \text{List}(\alpha)$ with join:

$$xs \sqcup ys = xs \mathbin{+\!\!+} ys$$

(list concatenation, associative with identity $[]$).

Note: List concatenation is technically a monoid operation, not a true join-semilattice operation (it is not commutative or idempotent). The \sqcup notation here emphasizes the merge semantics rather than strict lattice-theoretic structure.

Attribute sets: $\mathcal{L}_{\text{attrsOf}\, \alpha} = \text{Name} \to \mathcal{L}_\alpha$ with pointwise join:

$$(f \sqcup g)(n) = \begin{cases} f(n) \sqcup g(n) & \text{if } n \in \text{dom}(f) \cap \text{dom}(g) \\ f(n) & \text{if } n \in \text{dom}(f) \setminus \text{dom}(g) \\ g(n) & \text{if } n \in \text{dom}(g) \setminus \text{dom}(f) \end{cases}$$

Submodules: $\mathcal{L}_{\text{submodule}\, M} = \mu C.\, \text{Eval}(M, C)$ (recursive fixpoint).
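The pointwise attrsOf join above translates directly into code (a Python sketch; `join_attrs` and `join_elem` are illustrative names):

```python
def join_attrs(f, g, join_elem):
    """Pointwise join of two attribute sets, following the case split above:
    join elements on shared names, otherwise keep whichever side defines it."""
    out = {}
    for n in f.keys() | g.keys():
        if n in f and n in g:
            out[n] = join_elem(f[n], g[n])
        else:
            out[n] = f.get(n, g.get(n))
    return out

# listOf elements join by concatenation
joined = join_attrs({"a": [1], "b": [2]}, {"b": [3], "c": [4]},
                    join_elem=lambda x, y: x + y)
assert joined == {"a": [1], "b": [2, 3], "c": [4]}
```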

Priority stratification: Definitions carry a priority $p \in \mathbb{N}$ (lower is higher priority). The priority-filtered merge is:

$$\text{mergeWithPrio}([(p_1, d_1), \ldots, (p_n, d_n)]) = \bigsqcup \{ d_i \mid p_i = \min_j p_j \}$$

This forms a lexicographic ordering: first compare priorities, then merge equal-priority values.

Formally, we have a stratified lattice:

$$\mathcal{L}_{\text{withPrio}} = \mathbb{N}^{\text{op}} \times \mathcal{L}$$

ordered by $(p_1, v_1) \leq (p_2, v_2)$ iff $p_1 \geq p_2$ and ($p_1 > p_2$ or $v_1 \leq v_2$).

Conditional merging (mkIf): The condition mechanism extends the lattice with a bottom element representing “not defined”:

$$\mathcal{L}_{\text{conditional}} = \mathcal{L} + \{ \bot_{\text{undef}} \}$$

with $\bot_{\text{undef}} \sqcup v = v$ and $\bot_{\text{undef}}$ absorbing in the priority ordering.

The mkIf operation:

$$\text{mkIf} : \text{Bool} \to \mathcal{L} \to \mathcal{L}_{\text{conditional}}$$

$$\text{mkIf}(b, v) = \begin{cases} v & \text{if } b = \text{true} \\ \bot_{\text{undef}} & \text{if } b = \text{false} \end{cases}$$

Order control (mkBefore, mkAfter): For list-typed options, elements carry an order priority $o \in \mathbb{N}$. The merge operation first sorts by order priority, then concatenates:

$$\text{mergeOrdered}([(o_1, xs_1), \ldots, (o_n, xs_n)]) = \text{concat}(\text{sortBy}(\pi_1, [(o_1, xs_1), \ldots, (o_n, xs_n)]))$$

where $\text{sortBy}(\pi_1, \_)$ sorts tuples by their first component.

Why this matters: The algebraic structure ensures:

  1. Associativity: Order of module evaluation doesn’t matter (up to priority ties)
  2. Commutativity: Module order doesn’t matter (except for lists without order annotations)
  3. Idempotence: Importing a module twice has the same effect as once (if no side effects)
  4. Composability: Submodules compose cleanly because they use the same merge algebra

The join-semilattice structure is the key to modular reasoning: you can understand each module’s contribution independently and combine them without global analysis.

Source reference: nixpkgs lib/types.nix (entire file, especially type definitions)

Option types serve two purposes:

  1. Runtime validation: Check that defined values match the expected shape (e.g., “is this actually an integer?”)
  2. Merge behavior specification: Define how to combine multiple definitions into a single value

Every option has a type (defaulting to types.unspecified if not given). The type’s check function validates values, and its merge function combines them.

Common types:

  • Primitives (int, str, bool, path): Validate structure, require all definitions to be equal
  • Containers (listOf, attrsOf): Recursively validate elements, concatenate or recursively merge
  • Submodules (submodule, submoduleWith): Nest entire module evaluations
  • Combinators (nullOr, either, enum): Logical combinations of types

Types form a little language for describing configuration schemas, similar to JSON Schema or TypeScript types, but with merge semantics baked in.

A type is an attribute set with specific attributes:

mkOptionType {
  name = "descriptive-name";
  description = "human-readable description";
  check = value: /* returns true if value is valid */;
  merge = loc: defs: /* combines list of definitions into single value */;
  # Optional but important:
  getSubOptions = prefix: /* for documentation and submodules */;
  getSubModules = /* list of submodules if this is a submodule type */;
  substSubModules = m: /* rebind submodules for type merging */;
  typeMerge = functor: /* merge two types (e.g., two listOfs) */;
}

Type checking happens during merge:

mergedValue =
  if isDefined then
    if type.merge ? v2 then
      if checkedAndMerged.headError or null != null
      then throw "not of type ${type.description}"
      else checkedAndMerged.value
    else if all (def: type.check def.value) defsFinal
    then type.merge loc defsFinal
    else throw "not of type ${type.description}"
  else throw "option has no value defined";

Type merging allows combining option declarations:

# Module 1
options.foo = mkOption { type = types.listOf types.int; };
# Module 2
options.foo = mkOption { type = types.listOf types.int; };
# Result: types are merged, option is declared once

The typeMerge function checks compatibility:

defaultTypeMerge = f: f':
  if f.name != f'.name then null # incompatible
  else if hasPayload then
    if mergedPayload == null then null
    else f.type mergedPayload # e.g., merge elemTypes
  else f.type;

Submodule nesting creates recursive lattices:

types.submoduleWith { modules = [...]; }

When merged, creates a nested evalModules call with the submodule list, creating a recursive fixpoint:

merge = loc: defs:
  let
    configuration = base.extendModules {
      modules = allModules defs;
      prefix = loc;
    };
  in configuration.config;

The type system forms a category $\mathbf{Type}$ with:

  • Objects: Types $\tau$ (potentially infinite sets equipped with merge operations)
  • Morphisms: Type refinements (subtyping relations)

Each type $\tau$ defines:

  1. Value space: $\llbracket \tau \rrbracket \subseteq \text{Value}$ (the set of valid values)
  2. Merge algebra: $(\mathcal{M}_\tau, \sqcup_\tau)$ where $\mathcal{M}_\tau$ is a join-semilattice
  3. Interpretation function: $\text{merge}_\tau : \text{List}(\llbracket \tau \rrbracket) \to \llbracket \tau \rrbracket$

Type constructors are functors $\mathbf{Type} \to \mathbf{Type}$:

ListOf functor:

$$\text{ListOf} : \mathbf{Type} \to \mathbf{Type}$$

$$\llbracket \text{listOf}\, \alpha \rrbracket = \text{List}(\llbracket \alpha \rrbracket)$$

$$\sqcup_{\text{listOf}\, \alpha} = \text{concat} : \text{List}(\llbracket \alpha \rrbracket) \times \text{List}(\llbracket \alpha \rrbracket) \to \text{List}(\llbracket \alpha \rrbracket)$$

AttrsOf functor:

$$\text{AttrsOf} : \mathbf{Type} \to \mathbf{Type}$$

$$\llbracket \text{attrsOf}\, \alpha \rrbracket = \text{Name} \to \llbracket \alpha \rrbracket$$

$$(f \sqcup_{\text{attrsOf}\, \alpha} g)(n) = \begin{cases} f(n) \sqcup_\alpha g(n) & n \in \text{dom}(f) \cap \text{dom}(g) \\ f(n) & n \in \text{dom}(f) \setminus \text{dom}(g) \\ g(n) & n \in \text{dom}(g) \setminus \text{dom}(f) \end{cases}$$

Submodule as fixpoint:

$$\text{Submodule} : \text{List}(\text{Module}) \to \mathbf{Type}$$

$$\llbracket \text{submodule}\, [m_1, \ldots, m_n] \rrbracket = \mu C.\, \text{evalModules}([m_1, \ldots, m_n], C)$$

This is a recursive type equation: the type of a submodule is defined as the fixpoint of evaluating its modules, which may themselves contain submodules.

Type merging as pushout: When two modules declare the same option with different types, the system attempts to merge the types. This is a pushout in $\mathbf{Type}$:

$$\begin{array}{ccc} \tau_1 & \to & \tau_1 \sqcup \tau_2 \\ \downarrow & & \downarrow \\ \tau_2 & \to & \tau_1 \sqcup \tau_2 \end{array}$$

For compatible types (e.g., both listOf int), the pushout exists. For incompatible types (e.g., int and string), it doesn’t, and the system throws an error.

Constraint propagation: Type checking is interleaved with merging via the v2 merge protocol:

$$\text{merge}_\tau^{v2} : \text{List}(\llbracket \tau \rrbracket) \to (\llbracket \tau \rrbracket + \text{Error})$$

returning either the merged value or a type error. This enables fine-grained error messages pointing to specific problematic definitions.

Categorical perspective: The type system implements a graded monad where:

  • The grade is the type $\tau$
  • The functor $T_\tau$ maps values to “typed optional values”
  • The join operation $\mu_\tau : T_\tau(T_\tau(A)) \to T_\tau(A)$ is the merge function

The grading ensures type-safe composition: you can only merge values of compatible types.

Why types matter for composition: The type constraint lattice ensures:

  1. Local type checking: Each module’s definitions can be checked independently against declared types
  2. Compositional merging: Merge operations distribute over type constructors (e.g., merging two listOf int gives listOf int)
  3. Submodule isolation: Submodules can’t violate their parent’s type constraints
  4. Documentation generation: Types provide machine-readable schemas for automatic documentation

The type system transforms the module system from untyped attribute set merging into a typed configuration language with static guarantees.

While not a single primitive, the interaction between Nix’s lazy evaluation and the module system’s fixpoint computation deserves explicit treatment.

The module system’s “killer feature” is allowing modules to reference the final configuration while that configuration is still being computed. This works because Nix doesn’t evaluate expressions until their values are actually needed.

When you write:

{ config, ... }: {
  services.foo.enable = config.services.bar.enable;
}

Nix doesn’t immediately try to look up config.services.bar.enable. Instead, it creates a thunk (a suspended computation). Later, when something needs the value of services.foo.enable, Nix evaluates the thunk, which triggers evaluation of config.services.bar.enable, which may trigger other evaluations, and so on.

As long as there’s no strict cycle (A needs B’s value before B is computed, and B needs A’s value before A is computed), lazy evaluation finds a consistent solution.

The fixpoint is established via Nix’s recursive let binding:

let
  # Simplified view of evalModules internals
  config = mapAttrs (_: opt: opt.value) options;
  options = /* compute options by evaluating modules with 'config' */;
in config

This is a mutually recursive definition. Nix resolves it by:

  1. Allocating thunks for both config and options
  2. When options is demanded, evaluate it, which may demand parts of config
  3. When parts of config are demanded, evaluate those specific attributes, which demands parts of options
  4. Continue until a consistent fixpoint is reached (or detect infinite recursion)

Infinite recursion detection: Nix tracks which thunks are currently being evaluated. If evaluating thunk A demands the value of thunk A (before A has finished), that’s infinite recursion:

# This fails:
{ config, ... }: {
  services.foo.value = config.services.foo.value + 1;
}

But conditional recursion works:

# This works:
{ config, ... }: {
  services.foo.value =
    if config.services.bar.enable
    then config.services.baz.value
    else 42;
}

As long as the chain of demands terminates before cycling back, lazy evaluation succeeds.
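Both cases can be reproduced with a thunk that tracks whether it is currently being forced (a Python sketch of Nix's cycle detection; the `Thunk` class is illustrative):

```python
class Thunk:
    """A thunk with the same cycle detection Nix performs: forcing a thunk
    that is already mid-force is infinite recursion."""
    def __init__(self, fn):
        self.fn, self.state, self.value = fn, "unforced", None
    def force(self):
        if self.state == "forcing":
            raise RecursionError("infinite recursion encountered")
        if self.state == "unforced":
            self.state = "forcing"
            self.value = self.fn()
            self.state = "done"
        return self.value

config = {}
# conditional recursion: a demands b, and the chain of demands terminates
config["b"] = Thunk(lambda: False)
config["a"] = Thunk(lambda: 1 if config["b"].force() else 42)
assert config["a"].force() == 42

# strict cycle: foo demands its own value while it is being forced
config["foo"] = Thunk(lambda: config["foo"].force() + 1)
try:
    config["foo"].force()
    raise AssertionError("expected a detected cycle")
except RecursionError:
    pass  # detected, matching Nix's "infinite recursion encountered"
```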

The practical implication: You can write modules that make decisions based on what other modules decided, creating a declarative “logic programming” style configuration where the order of module evaluation doesn’t matter.

Terminology note: When describing the computational mechanism, we use “fixpoint” or “lazy fixpoint” to emphasize self-referential binding with demand-driven evaluation. “Least fixpoint” is appropriate only in formal domain-theoretic contexts (like this section) where we characterize the mathematical object. Nix does not compute via Kleene iteration ⊥, f(⊥), f²(⊥), ...—it uses let x = f x in x with lazy thunk resolution.

The fixpoint computation implements a domain-theoretic least fixpoint via Nix’s lazy evaluation strategy.

Scott domains: Nix values form a Scott domain—a partially ordered set where:

  1. Every directed subset has a least upper bound (join)
  2. The ordering represents “information content” ($\bot \sqsubseteq x$ means $\bot$ is “less defined” than $x$)

The bottom element $\bot$ represents “not yet evaluated” (a thunk). As evaluation proceeds, thunks are replaced with more defined values.

Continuous functions: Each module defines a continuous function:

$$F_m : \mathcal{D}_{\text{Config}} \to \mathcal{D}_{\text{Config}}$$

where $\mathcal{D}_{\text{Config}}$ is the Scott domain of configurations, and continuity means:

$$F\left(\bigsqcup_{i \in I} c_i\right) = \bigsqcup_{i \in I} F(c_i)$$

for any directed set $\{c_i\}_{i \in I}$.

Fixpoint theorem: For a continuous function $F : \mathcal{D} \to \mathcal{D}$ on a pointed domain $(\mathcal{D}, \sqsubseteq, \bot)$, the least fixpoint exists and equals:

$$\mu F = \bigsqcup_{n \geq 0} F^n(\bot)$$

Nix’s lazy evaluation implements this via demand-driven thunk resolution (not iteration):

  • All values start as thunks (unevaluated expressions)
  • When a value is demanded, its thunk is forced, which may demand other thunks
  • Memoization caches evaluated thunks to avoid recomputation
  • No explicit iteration sequence $F^0, F^1, F^2, \ldots$ is constructed

Well-founded recursion: The domain has finite height in the demanded slice (the portion of the configuration actually needed). This ensures:

$$\exists n.\, F^n(\bot) = F^{n+1}(\bot)$$

meaning iteration converges in finitely many steps.

Thunk semantics: A thunk is a suspended computation—an unevaluated expression in the domain $\mathcal{D}$. Thunks represent the bottom element $\bot$ before evaluation; forcing a thunk either produces a value or another thunk (in case of lazy chains). The partiality aspect comes from non-termination: a thunk that loops forever never produces a value.

Infinite recursion as non-termination: A strict cycle yields a function that produces no information at any stage:

$$F(\bot) = \bot, \quad F(F(\bot)) = \bot, \quad \ldots$$

The least fixpoint is $\bot$ (undefined), and Nix’s evaluator detects this by tracking the call stack.

Categorical perspective: The fixpoint construction is a trace operation in the category of domains and continuous functions:

$$\text{Tr}_C : \mathbf{Dom}(A \otimes C, B \otimes C) \to \mathbf{Dom}(A, B)$$

For $f : A \otimes C \to B \otimes C$, the trace $\text{Tr}_C(f) : A \to B$ is obtained by taking the fixpoint over the internal state $C$.

In the module system:

  • $A$ is the external input (module parameters like pkgs)
  • $B$ is the output configuration
  • $C$ is the internal state (the configuration being computed)
  • $f$ is the module evaluation function
  • $\text{Tr}_C(f)$ is evalModules, which ties the configuration back to itself

The trace operation implements the “knot-tying” that makes self-referential configurations work.

Why this enables modular reasoning: The fixpoint is deterministic (least fixpoint is unique) and depends only on the module functions, not evaluation order. This means:

  1. Module composition is order-independent (commutative)
  2. Adding modules is monotone (more modules = more defined configuration)
  3. Local reasoning is sound (understand each module in isolation, combine via fixpoint)

The domain-theoretic foundation ensures the module system’s declarative semantics are mathematically rigorous, not just “it works because Nix is lazy.”

The Nix module system’s algebraic primitives form a coherent mathematical structure:

  1. Deferred modules embed Kleisli category morphisms, enabling computation to reference fixpoint results; at the type level, they form a monoid under imports list concatenation
  2. evalModules computes the unique least fixpoint in domain-theoretic configuration lattices via demand-driven lazy evaluation, not classical Kleene iteration
  3. Merging primitives implement semantic-level join-semilattice operations with priority stratification (after fixpoint resolves references)
  4. Types define graded monads constraining merge algebras and enabling compositional reasoning
  5. Lazy evaluation realizes domain-theoretic fixpoints via demand-driven thunk evaluation, computing only demanded portions of the configuration

Together, these primitives transform attribute set merging into a powerful typed functional language for declarative configuration, where:

  • Composition is order-independent (associative, commutative)
  • Submodules nest cleanly (recursive fixpoints)
  • Overrides work predictably (priority lattice)
  • Circular dependencies resolve automatically (lazy fixpoint)

Understanding these algebraic foundations explains why the module system supports complex patterns (conditional imports, priority overrides, recursive submodules) while maintaining predictable semantics. The mathematics isn’t just theoretical—it’s the reason NixOS configurations compose reliably at scale.