LINC Reference

LINC is the link and binary evidence layer in the parc -> linc -> gerc toolchain.

It owns evidence: not parsing and not lowering. The crate surface today is broader than the preferred top-level story, though: both the contract-first APIs and the older low-level IR/bootstrap APIs remain real, supported surface.

What LINC Is For

LINC turns normalized source intent into native evidence. It can:

  • normalize declared link requirements
  • inspect object, archive, and shared-library artifacts
  • probe ABI-relevant layouts
  • validate declarations against binary reality
  • serialize the resulting evidence for downstream consumers

What LINC Produces

The main outputs are:

  • LinkAnalysisPackage
  • SymbolInventory
  • ResolvedLinkPlan
  • ValidationReport
  • AbiProbeReport

Those outputs are transportable evidence artifacts. The preferred modern consumer path is SourcePackage -> LinkAnalysisPackage, but LINC also still exposes BindingPackage and lower-level IR for direct inspection and staged work.

Data Flow

normalized source input
  -> linc analysis
  -> link/binary evidence artifacts
  -> downstream consumer

In practice the preferred input is SourcePackage, the preferred analysis entrypoint is analyze_source_package, and symbol/probe/validation are layered evidence on top.

Ownership Boundary

LINC owns:

  • the evidence model
  • the link surface
  • the validation story
  • the ABI probe story

LINC does not own:

  • parser internals
  • source preprocessing internals
  • Rust code generation
  • library-level composition with parc or gerc

Composition across packages belongs in tests, examples, or external harnesses.

Modules And APIs

The root APIs are:

  • analyze_source_package
  • inspect_symbols
  • probe_type_layouts
  • resolve_link_plan
  • validate

The root also still re-exports many low-level IR and report types, and tests exercise those paths directly.

The important modules are:

  • intake
  • analysis
  • link_plan
  • probe
  • symbols
  • validate
  • diagnostics
  • error

raw_headers still exists for repo-local bootstrap and fixture work. It is not the architectural center of the crate, but it is still a public low-level surface that the book needs to acknowledge honestly.

Reading Order

  1. Getting Started
  2. Intake Layer
  3. Header Processing
  4. IR Model
  5. Native Evidence
  6. API Contract
  7. End-To-End Workflows
  8. Operations And Release

Getting Started

This chapter shows the shortest path from a normalized source contract to machine-readable evidence.

Read linc as an analysis library. It produces evidence artifacts. It does not promise that every successful analysis is safe for final build execution or Rust generation.

In the toolchain split:

  • parc owns source meaning
  • linc owns link and binary meaning
  • gerc owns lowering and emitted build metadata

The boundary rule is strict: linc/src/** must not depend on parc or gerc, and cross-package translation belongs only in tests, examples, or external harnesses.

Add The Crate

Use a local path dependency while developing in the workspace:

[dependencies]
linc = { path = "../linc" }

If you need symbol inspection or validation, enable the symbols feature.
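Assuming the same path layout as above and the feature name stated here, that looks like:

```toml
[dependencies]
linc = { path = "../linc", features = ["symbols"] }
```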

Minimal Example

use linc::{
    analyze_source_package,
    SourceDeclaration,
    SourceFunction,
    SourcePackage,
    SourceType,
};

fn main() {
    let mut source = SourcePackage::default();
    source.declarations.push(SourceDeclaration::Function(SourceFunction {
        name: "mylib_init".into(),
        parameters: vec![],
        return_type: SourceType::Int,
        variadic: false,
        source_offset: None,
    }));

    let analysis = analyze_source_package(&source);
    println!(
        "declared link inputs: {}",
        analysis.declared_link_surface.ordered_inputs.len()
    );
    println!(
        "has resolved plan: {}",
        analysis.resolved_link_plan.is_some()
    );
}

The preferred output contract is LinkAnalysisPackage.

JSON Round Trip

LinkAnalysisPackage is the contract intended to be exchanged across tools.

#![allow(unused)]
fn main() {
use linc::{analyze_source_package, LinkAnalysisPackage, SourcePackage};

let analysis = analyze_source_package(&SourcePackage::default());
let json = serde_json::to_string_pretty(&analysis).unwrap();
let restored: LinkAnalysisPackage = serde_json::from_str(&json).unwrap();
assert_eq!(analysis, restored);
}

Common Integration Pattern

The common pattern is:

  1. produce a SourcePackage in parc or another frontend
  2. call analyze_source_package
  3. optionally inspect artifacts with inspect_symbols
  4. optionally validate against those artifacts
  5. pass SourcePackage plus LinkAnalysisPackage to downstream tooling

If parc emits a serialized source artifact, a test, example, or external harness should decode and translate it before calling linc.

First Things To Inspect

When an analysis result does not look right, inspect these fields first:

  • analysis.declared_link_surface
  • analysis.resolved_link_plan
  • analysis.diagnostics
  • analysis.abi_probe
  • analysis.validation
  • analysis.symbol_inventories

Those surfaces usually tell you whether the problem is source intake, ABI probing, link metadata declaration, provider discovery, or validation.

Library-Only Design

linc is intended to be consumed as a Rust library that owns only link and binary evidence concerns.

That means:

  1. call analyze_source_package() or other public APIs directly
  2. serialize the resulting values if another tool needs artifacts
  3. keep cross-package translation in tests/examples/harnesses
  4. keep final generation and build policy in downstream tools rather than in linc

Intake Layer

The intake layer is LINC’s frontend-neutral source contract.

It defines what LINC needs from an upstream frontend without coupling to any specific parser AST or source extraction implementation.

SourcePackage

The primary intake type is SourcePackage.

An upstream frontend such as parc produces this after scanning and extracting source-level information.

#![allow(unused)]
fn main() {
use linc::{
    analyze_source_package,
    SourceDeclaration,
    SourceFunction,
    SourcePackage,
    SourceType,
};

let mut source = SourcePackage::default();
source.source_path = Some("mylib.h".into());
source.declarations.push(SourceDeclaration::Function(SourceFunction {
    name: "init".into(),
    parameters: vec![],
    return_type: SourceType::Int,
    variadic: false,
    source_offset: None,
}));

let analysis = analyze_source_package(&source);
assert!(analysis.resolved_link_plan.is_some() || !analysis.diagnostics.is_empty());
}

Declaration Types

The intake layer supports these declaration kinds:

  • SourceFunction for function declarations
  • SourceRecord for struct/union declarations
  • SourceEnum for enum declarations with variants
  • SourceTypeAlias for typedef and alias declarations
  • SourceVariable for external variable declarations

Records may be opaque when fields is None.
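The opaque-record convention can be shown with a toy model. The struct below is illustrative only, not the crate's real SourceRecord or RecordBinding definition; only the `fields: None` convention comes from the text above.

```rust
// Toy model of opaque records: `fields: None` means the type exists by
// name but layout and fields are intentionally unavailable.
// Illustrative only; not linc's actual IR types.
struct ToyRecord {
    name: String,
    fields: Option<Vec<String>>,
}

fn is_opaque(r: &ToyRecord) -> bool {
    r.fields.is_none()
}

fn main() {
    let opaque = ToyRecord { name: "FILE".into(), fields: None };
    let concrete = ToyRecord {
        name: "point".into(),
        fields: Some(vec!["x".into(), "y".into()]),
    };
    assert!(is_opaque(&opaque));
    assert!(!is_opaque(&concrete));
    println!("{} opaque: {}", opaque.name, is_opaque(&opaque));
}
```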

Type Model

SourceType is a simplified, language-neutral type representation. It is not a full lossless C type system.

It covers:

  • primitive types such as Void, Bool, Char, Int, UInt, and Long
  • pointers and const pointers
  • arrays
  • function pointers
  • references to typedefs, records, and enums
  • Const and Volatile wrappers

Intake Contract

The intended intake path is:

  1. produce SourcePackage
  2. call analyze_source_package
  3. consume LinkAnalysisPackage

Any adapter code that converts a serialized source artifact into SourcePackage belongs in tests, examples, or an external harness.

Design Principles

  1. LINC core logic should say “analyze this normalized source surface”, not “parse this”
  2. adapter code is separate from core analysis logic
  3. parc may be used in tests, but not as a library-level dependency of LINC
  4. another frontend should be able to replace parc without rewriting LINC

Header Processing

HeaderConfig is a repo-local bootstrap utility for turning raw header sets into a BindingPackage.

It exists because the repository still needs a way to start from real headers in difficult test and bootstrap scenarios. It is not the architectural center of LINC, but it is still real public API and still covered by tests.

The intended architecture is:

  • an upstream frontend such as parc owns preprocessing, parsing, and declaration extraction
  • LINC consumes normalized source input and produces evidence
  • cross-package translation belongs outside linc/src/**

What HeaderConfig Is Good For

Use HeaderConfig when you need to:

  • bootstrap the repository from real system or vendored headers
  • drive difficult header fixtures without teaching another frontend every edge case first
  • gather preprocessing output, extracted declarations, native link metadata, and probe evidence in one local pass

It is not the preferred downstream boundary.

Conceptual Domains

Even though HeaderConfig is one builder, it carries several distinct domains:

  1. preprocessing environment
  2. entry-header selection
  3. declared native-link intent
  4. ABI probe requests
  5. origin-filtering policy

Configuration Surface

The most important builder methods are:

Method                     Purpose
header(path)               Add an entry header
include_dir(path)          Add an include search path
framework_dir(path)        Add a framework search path
library_dir(path)          Add a native library search path
define(name, value)        Add a preprocessor define
compiler(cmd)              Override the driver used for preprocessing or probing
flavor(f)                  Select dialect handling
origin_filter(f)           Keep only declarations from selected origins
no_origin_filter()         Keep declarations from every origin
probe_type_layout(name)    Request compiler-probed layout data

Repeated path, define, link, constraint, and probe calls append in order. The builder does not deduplicate for you.
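The append-in-order, no-deduplication behavior can be sketched with a minimal stand-in builder. This is not the real HeaderConfig, which has a much richer surface; only the append-and-keep-duplicates behavior comes from the text above.

```rust
// Illustrative stand-in for the append-only builder behavior described
// above; NOT the real linc::HeaderConfig.
#[derive(Default, Debug)]
struct SketchConfig {
    headers: Vec<String>,
    defines: Vec<(String, String)>,
}

impl SketchConfig {
    fn header(mut self, path: &str) -> Self {
        // Repeated calls append in order; nothing is deduplicated.
        self.headers.push(path.to_string());
        self
    }

    fn define(mut self, name: &str, value: &str) -> Self {
        self.defines.push((name.to_string(), value.to_string()));
        self
    }
}

fn main() {
    let cfg = SketchConfig::default()
        .header("a.h")
        .header("b.h")
        .header("a.h") // duplicate is kept, order preserved
        .define("DEBUG", "1");
    assert_eq!(cfg.headers, vec!["a.h", "b.h", "a.h"]);
    println!("{:?}", cfg.headers);
}
```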

Validation Before Execution

The bootstrap path validates its inputs before it executes. Treat invalid configuration as an operational error, not as a diagnostic hidden inside a usable result.

What The Bootstrap Path Does

The bootstrap helpers are for local repository work and test fixtures:

  1. synthesize a temporary translation unit from the configured entry headers
  2. preprocess it with the configured compiler and dialect settings
  3. capture macros from the same environment
  4. extract declarations and attached metadata
  5. attach target, input, and declared link provenance
  6. optionally probe requested layouts
  7. optionally filter by origin

The resulting package is a bootstrap artifact built around BindingPackage, not the preferred downstream boundary.

Policy

If you are writing new downstream code:

  • do not treat HeaderConfig as the pipeline contract
  • do not move cross-package translation into linc/src/**
  • do not build new docs or examples around this path unless the point is specifically repository bootstrap

Use it when it helps the repository analyze difficult headers. Do not mistake it for the long-term boundary between packages.

Compiler And Flavor

LINC uses the compiler as a preprocessor and ABI probe driver.

Flavor affects parsing expectations and extension handling:

  • GnuC11
  • ClangC11
  • StdC11

In general:

  • use ClangC11 when the header stack is written for Clang tooling
  • use GnuC11 when the project assumes GCC-style C extensions
  • use StdC11 only when you want a stricter source profile

The bootstrap path can record the native inputs that the extracted API expects. These declarations are preserved in the resulting package. The bootstrap path does not link anything by itself; it records intent and normalized link surface.

Layout Probing During Scan

You can request ABI layout facts directly in the bootstrap configuration. The resulting package will include layout evidence when probe requests succeed.

IR Model

The primary internal data model in LINC is BindingPackage.

This is the durable evidence artifact used for:

  • JSON serialization
  • validation against native artifacts
  • link-plan construction
  • evidence hand-off to downstream tools

Module Organization

The IR is split into focused submodules:

  • ir::types for declarations, functions, records, enums, typedefs, and variables
  • ir::link for native link surfaces and provider matching
  • ir::macros for preprocessor macro capture and classification

The crate root intentionally stays narrower and exposes workflow-facing entry points plus the small cross-crate contracts. Consumers that need the detailed binding IR should import it from linc::ir directly.

Top-Level Shape

At a high level, a BindingPackage contains:

  • schema_version
  • linc_version
  • target
  • inputs
  • macros
  • layouts
  • link
  • provenance
  • macro_provenance
  • effective_macro_environment
  • source_path
  • items
  • diagnostics

This matters because the package is not just “the declarations”. It is the declaration surface plus the environment needed to interpret it.

target

target stores information about the scan environment:

  • target triple
  • compiler command
  • compiler version
  • flavor

These fields are descriptive rather than prescriptive. They help downstream tooling understand what environment produced the package.

inputs

inputs records the source-side configuration of the scan:

  • entry_headers
  • include_dirs
  • defines

That is useful for reproducibility, debugging, downstream rebuild decisions, and comparing packages produced from different header environments.

BindingLinkSurface is the normalized native-link surface attached to the package. It preserves:

  • preferred link mode
  • native surface kind
  • platform constraints
  • include, framework, and library paths
  • declared libraries, frameworks, and artifacts
  • original ordered inputs

This is evidence about the native surface, not a build system of its own.

items

items is the core declaration surface.

Supported variants:

Variant        Meaning
Function       C function declaration
Record         struct or union
Enum           C enum with named variants
TypeAlias      typedef
Variable       extern/global variable
Unsupported    recognized but not fully representable construct

Downstream tools should not ignore Unsupported blindly. Those entries are signals that the source surface contains shapes LINC saw but could not faithfully lower.

BindingType

BindingType represents the type graph used throughout functions, fields, typedefs, and variables.

Primitive types include:

  • Void
  • Bool
  • Char
  • SChar
  • UChar
  • Short
  • UShort
  • Int
  • UInt
  • Long
  • ULong
  • LongLong
  • ULongLong
  • Float
  • Double
  • LongDouble

Compound/reference forms include:

  • Pointer
  • Array
  • FunctionPointer
  • TypedefRef
  • RecordRef
  • EnumRef
  • Opaque

Pointer constness is modeled on the pointee so the IR can distinguish char *, const char *, and char * const without pretending to be a full C semantic model.
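The pointee-const design can be illustrated with a toy enum. This is not the crate's actual BindingType definition; it only demonstrates why const-on-the-pointee plus a separate pointer-self flag is enough to tell the three C spellings apart.

```rust
// Toy model of const-on-the-pointee; illustrative only, not linc's real IR.
#[derive(Debug, PartialEq)]
enum ToyType {
    Char,
    Pointer {
        pointee: Box<ToyType>,
        // distinguishes `const char *` from `char *`
        const_pointee: bool,
        // models `char * const` (the pointer itself is const)
        const_self: bool,
    },
}

fn main() {
    // char *
    let p = ToyType::Pointer {
        pointee: Box::new(ToyType::Char),
        const_pointee: false,
        const_self: false,
    };
    // const char *
    let cp = ToyType::Pointer {
        pointee: Box::new(ToyType::Char),
        const_pointee: true,
        const_self: false,
    };
    // char * const
    let pc = ToyType::Pointer {
        pointee: Box::new(ToyType::Char),
        const_pointee: false,
        const_self: true,
    };
    assert_ne!(p, cp);
    assert_ne!(p, pc);
    assert_ne!(cp, pc);
    println!("three distinct pointer shapes modeled");
}
```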

Functions

FunctionBinding contains:

  • name
  • calling_convention
  • parameters
  • return_type
  • variadic
  • source_offset

Current calling convention coverage is conservative. When the extractor sees recognized declaration attributes such as stdcall, cdecl, fastcall, vectorcall, or thiscall, FunctionBinding.calling_convention preserves that evidence instead of flattening everything to plain C.

Records

RecordBinding represents both struct and union.

It carries:

  • kind
  • name
  • fields
  • optional representation evidence
  • source_offset

Opaque records are represented by fields: None. That means the type exists by name but layout and fields are intentionally unavailable.

Each FieldBinding may also carry bit_width. When bit_width is present, the field is a bitfield and LINC preserved that width as partial evidence even if full ABI placement is not yet available.

FieldBinding.layout is the companion ABI-evidence slot for compiler-probed field placement. It mainly carries offset_bytes when record field probing has been requested and succeeded.

When available from compiler probing, RecordBinding.representation preserves size, align, and completeness.

RecordBinding.abi_confidence is the higher-level summary of how much ABI evidence was attached.

Enums

EnumBinding stores:

  • enum name
  • variants
  • optional representation evidence
  • source offset

Each variant carries name and optional value. If a value is absent, downstream tooling should not invent one without understanding the original source and evaluation context.

When available from compiler probing, EnumBinding.representation preserves underlying size and signedness.

Macros And Provenance

The IR also carries supporting evidence:

  • macro capture and classification
  • compiler-probed type layouts
  • provenance for declarations and macros
  • effective macro environment snapshots

Do not think of the IR as “just declarations”. It is the full analysis record.

Serialization Rules

  • keep container fields defaultable
  • preserve declaration order where the contract depends on it
  • treat additive fields as the normal evolution path
  • serialize deterministic JSON in tests and fixtures

What The IR Is Not

The IR is not:

  • a parser AST
  • a Rust codegen AST
  • a shared ABI crate
  • a build graph

It is the LINC evidence model.

For typedef-style named types, TypeAliasBinding.abi_confidence records whether the alias remains declaration-only or now has compiler-probed layout evidence attached.

TypeAliasBinding.canonical_resolution is the alias-normalization slot. When present, it preserves:

  • the typedef names crossed during alias chasing
  • the terminal non-alias BindingType that downstream consumers can treat as the canonical shape

BindingType now also carries explicit qualifier metadata beyond the old pointer-const shortcut:

  • pointer nodes preserve pointer-self qualifiers
  • BindingType::Qualified preserves top-level const / volatile / restrict / atomic evidence

The current Rust codegen layer still lowers most qualifiers conservatively, but downstream library consumers no longer need to reconstruct that evidence from diagnostics alone.

JSON Transport

BindingPackage remains the durable transport artifact, but LINC no longer wraps JSON transport in crate-specific helper functions.

Use serde_json directly at the tool boundary:

#![allow(unused)]
fn main() {
use linc::ir::BindingPackage;

// given a previously produced `package: BindingPackage`
let json = serde_json::to_string_pretty(&package).unwrap();
let restored: BindingPackage = serde_json::from_str(&json).unwrap();
}

That keeps the artifact story centered on the data contract itself rather than convenience helpers.

Variables

VariableBinding captures global variables and extern declarations with:

  • name
  • ty
  • source_offset

These are validated separately from functions because symbol-kind mismatches matter.

Unsupported Items

UnsupportedItem should be treated as a first-class signal.

It usually means one of these:

  • the syntax was recognized
  • extraction could not preserve enough structure
  • a diagnostic was emitted
  • the binding package is partial, not complete

This is a safer design than silently dropping source constructs.

Even though declarations are the center of the package, three other surfaces often matter just as much:

  • macros
  • layouts
  • link

That is why they live at the package level. They are package-wide evidence, not per-item add-ons.

Declaration Provenance

provenance is a package-level list aligned with items.

Each entry may currently carry:

  • item_name
  • item_kind
  • source_offset
  • source_origin
  • source_location

This is intentionally additive evidence. It gives downstream tooling a stable way to talk about where a declaration came from without rewriting the declaration IR around source metadata.

Serialization Rules

The entire package is serde serializable.

#![allow(unused)]
fn main() {
use linc::ir::BindingPackage;

// given a previously produced `package: BindingPackage`
let json = serde_json::to_string_pretty(&package).unwrap();
let restored: BindingPackage = serde_json::from_str(&json).unwrap();
}

Important artifact behavior:

  • the package carries a schema_version
  • unknown future schema versions are rejected
  • older JSON missing newer optional fields generally deserializes with defaults

That makes the package suitable for machine-to-machine contracts.

What The IR Is Not

The IR is useful, but it is intentionally not:

  • a full semantic C type system
  • a full ABI proof
  • a final link plan
  • a final code generation contract for every target

Use it as the normalized source of truth for downstream decisions, not as a claim that all C semantics have been solved.

Origin Filtering

By default, LINC does not blindly keep every declaration found after preprocessing. It uses source-origin information to keep the extracted surface focused on the headers you asked for.

This behavior is one of the reasons scans stay usable on real systems with deep transitive header trees.

The Problem Filtering Solves

A normal header often pulls in:

  • C runtime headers
  • platform SDK headers
  • project-local support headers
  • unrelated transitive declarations

If all of that were kept by default, a scan of one library header could explode into a large, noisy package dominated by system declarations.

How Origin Tracking Works

The C preprocessor emits line markers such as:

# 42 "/usr/include/stdio.h" 3

LINC parses those markers into a FileOriginMap. That map is then used to classify declaration offsets.

Current origin classes are:

Origin         Meaning
Entry          From an entry header explicitly requested by the user
UserInclude    From a non-system header included by an entry header
System         From a system header
Unknown        The origin could not be determined reliably
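A minimal, std-only sketch of classifying such line markers follows. The real FileOriginMap is richer; the rules here (the trailing system-header flag `3` emitted by GCC and Clang, exact-path matching against entry headers) are simplified assumptions.

```rust
// Sketch of classifying preprocessor line markers such as
//   # 42 "/usr/include/stdio.h" 3
// into origin classes. Simplified; not linc's real FileOriginMap.
#[derive(Debug, PartialEq)]
enum Origin {
    Entry,
    UserInclude,
    System,
    Unknown,
}

fn classify_marker(line: &str, entry_headers: &[&str]) -> Origin {
    // Split around the quoted file path.
    let mut parts = line.trim().splitn(2, '"');
    let _prefix = parts.next();
    let Some(rest) = parts.next() else { return Origin::Unknown };
    let Some((path, flags)) = rest.split_once('"') else { return Origin::Unknown };
    if entry_headers.contains(&path) {
        Origin::Entry
    } else if flags.split_whitespace().any(|f| f == "3") {
        // GCC/Clang mark system headers with flag 3.
        Origin::System
    } else {
        Origin::UserInclude
    }
}

fn main() {
    let sys = classify_marker("# 42 \"/usr/include/stdio.h\" 3", &["mylib.h"]);
    assert_eq!(sys, Origin::System);
    let entry = classify_marker("# 1 \"mylib.h\"", &["mylib.h"]);
    assert_eq!(entry, Origin::Entry);
    println!("{:?} {:?}", sys, entry);
}
```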

Default Behavior

The default OriginFilter keeps entry-header declarations and user-include declarations, and excludes system-header declarations.

This is usually the right tradeoff for evidence generation because it preserves the API surface while avoiding C runtime clutter.

Disable Filtering Entirely

Disable filtering when you want the complete preprocessed declaration world. That is useful for debugging missing declarations or validating whether a declaration really exists after preprocessing.

Custom Filters

Custom filters are useful when system declarations are intentionally part of the bindable contract.

Practical Advice

If a declaration seems to be missing:

  1. rerun with .no_origin_filter()
  2. inspect the preprocessing report
  3. confirm the declaration was not removed by macro conditions
  4. confirm the declaration still maps cleanly to a known origin

Most missing-item surprises come from one of those four causes.

Why Filtering Happens After Extraction

LINC first extracts from the parsed translation unit and then filters by origin.

That design has two benefits:

  • extraction logic sees the same full parse tree the compiler saw
  • filtering policy stays configurable and separate from parsing

It also means you can inspect the same source through multiple origin policies without changing preprocessing itself.

Macros And Layouts

Two of the most important “not just declarations” surfaces in LINC are macro inventory and compiler-probed type layouts.

Together they close a large part of the gap between syntax-only header extraction and ABI-aware analysis.

Macro Inventory

BindingPackage.macros captures macro definitions seen during raw-header or bootstrap scans.

Each MacroBinding carries:

  • name
  • body
  • function_like
  • form
  • kind
  • category
  • optional parsed value for bindable integer/string constants

BindingPackage.macro_provenance carries package-level provenance entries for captured macros, including origin classification and source location where line-marker evidence is available.

Macro Kind

Current kinds are:

  • Integer
  • String
  • Expression
  • Other

This is a structural classification of the macro body.

Macro Category

Current categories are:

  • BindableConstant
  • ConfigurationFlag
  • AbiAffecting
  • Unsupported

This is a higher-level classification intended to help downstream consumers decide which macros are relevant.

Why Macro Capture Matters

Many real C APIs encode essential information in macros:

  • integer constants
  • version identifiers
  • feature toggles
  • calling-convention selectors
  • export/import annotations
  • ABI-affecting packing or configuration knobs

Without macros, a binding package is often incomplete even if declaration extraction succeeded.

Practical Macro Interpretation

Downstream tools should usually treat categories differently:

  • BindableConstant: good candidates for generated constants
  • ConfigurationFlag: environment and availability signals
  • AbiAffecting: do not ignore; these may change layout or calling behavior
  • Unsupported: evidence worth reporting, not blindly generating
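The per-category treatment above can be sketched as a downstream policy function. The category names come from this chapter; the policy strings and the standalone enum are illustrative assumptions, not linc API.

```rust
// Sketch of a downstream policy over the macro categories described
// above. The enum mirrors the documented names; the policy is illustrative.
#[derive(Debug)]
enum MacroCategory {
    BindableConstant,
    ConfigurationFlag,
    AbiAffecting,
    Unsupported,
}

fn macro_policy(category: &MacroCategory) -> &'static str {
    match category {
        MacroCategory::BindableConstant => "generate a constant",
        MacroCategory::ConfigurationFlag => "record as an environment/availability signal",
        MacroCategory::AbiAffecting => "surface loudly; may change layout or calling behavior",
        MacroCategory::Unsupported => "report as evidence; do not generate blindly",
    }
}

fn main() {
    for c in [
        MacroCategory::BindableConstant,
        MacroCategory::AbiAffecting,
        MacroCategory::Unsupported,
    ] {
        println!("{:?}: {}", c, macro_policy(&c));
    }
}
```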

Layout Probing

TypeLayout currently stores:

  • name
  • size
  • align

The layouts are produced by compiler-assisted probing. That means they reflect the configured compiler environment rather than guessed sizes.

AbiProbeReport also preserves target/compiler identity metadata alongside the layouts. That makes probe evidence auditable and safer to hand across process or repo boundaries.

Probe Subjects

The report also carries subjects. Each ProbeSubjectReport keeps:

  • the requested subject name
  • its broad subject kind (Type, Record, or Enum)
  • probe confidence
  • record completeness when the subject is a record
  • the measured TypeLayout

For record subjects, fields may also preserve named field offsets as compiler-measured evidence.

For bitfields, the current probe surface is intentionally partial:

  • bit_width may be present
  • offset_bytes may remain absent

That is deliberate. LINC preserves width evidence where it can, but does not guess a byte offset for bitfields when the probe path cannot establish one safely.

Probe Degradation Semantics

Probe requests do not all fail for the same reason.

  • ProbeUnavailable means the requested subject did not have a safely probeable layout in the current compilation model
  • ProbeFailed means the probe mechanism itself failed operationally or compiled invalid probe input

That split lets a downstream generator apply a policy such as:

  • tolerate ProbeUnavailable for explicitly opaque inputs
  • require layouts for by-value ABI-sensitive records and typedef-backed value types
  • treat any ProbeFailed result as suspicious until the probe path is fixed or explicitly waived
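Such a gate can be sketched as follows. The two failure-mode names come from the text above; the outcome enum, the flags, and the gate logic itself are illustrative assumptions about a downstream consumer, not linc API.

```rust
// Illustrative downstream gate over the two probe failure modes
// described above. Not linc API; the acceptance rules are assumptions.
#[derive(Debug, PartialEq)]
enum ProbeOutcome {
    Measured,
    ProbeUnavailable,
    ProbeFailed,
}

fn accept(outcome: &ProbeOutcome, explicitly_opaque: bool, abi_sensitive: bool) -> bool {
    match outcome {
        ProbeOutcome::Measured => true,
        // Tolerate "no safely probeable layout" only for declared-opaque,
        // non-ABI-sensitive inputs.
        ProbeOutcome::ProbeUnavailable => explicitly_opaque && !abi_sensitive,
        // An operational probe failure stays suspicious until fixed or waived.
        ProbeOutcome::ProbeFailed => false,
    }
}

fn main() {
    assert!(accept(&ProbeOutcome::Measured, false, true));
    assert!(accept(&ProbeOutcome::ProbeUnavailable, true, false));
    assert!(!accept(&ProbeOutcome::ProbeUnavailable, false, true));
    assert!(!accept(&ProbeOutcome::ProbeFailed, true, false));
    println!("probe gate ok");
}
```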

Enum subjects also preserve:

  • enum_underlying_size
  • enum_is_signed

This gives downstream generators a concrete representation hint even before field-level enum analysis exists in the declaration IR.

What Layouts Solve

Compiler-probed layouts are especially useful for:

  • checking that opaque vs non-opaque modeling matches reality
  • proving sizeof and alignof for important structs
  • gating generation on ABI-sensitive records
  • preserving ABI evidence in a transportable JSON package

What Layouts Do Not Yet Solve

Current layout data is intentionally small. It does not yet provide a full field-offset or bitfield-layout model.

So treat TypeLayout as stronger than guessing, but not yet a complete ABI proof for all record shapes.

Native Evidence

This section groups the parts of LINC that compare source-side intent against native artifact-side evidence.

Read this section when you care about:

  • what symbols an artifact exports or imports
  • what native link inputs a package declares
  • whether declarations and artifacts agree strongly enough for downstream use

The normal reading order inside this section is:

  1. Symbol Inventories
  2. Link Surface
  3. Validation

Use this path when you are moving from “I have a source contract” to “I trust this native surface enough to generate and link against it”.

The architectural rule stays the same here too:

  • LINC consumes source-shaped input
  • LINC emits evidence artifacts
  • downstream generation still happens elsewhere

Symbol Inventories

When the symbols feature is enabled, LINC can inspect native artifacts and produce a SymbolInventory.

This is the artifact-side counterpart to the source-side evidence package.

Why Symbol Inventories Matter

Header extraction tells you what the C surface claims exists. Artifact inspection tells you what a native file actually exports or imports.

You need both when you want to answer questions such as:

  • does this library really provide the declarations I scanned?
  • which artifact satisfies a symbol?
  • is the symbol hidden, weak, or duplicated?
  • what shared-library dependencies does this artifact declare?

Entry Point

#![allow(unused)]
fn main() {
use linc::inspect_symbols;

let inventory = inspect_symbols("build/libdemo.so").unwrap();
}

Supported Artifact Shapes

Current artifact coverage includes:

Platform format    Typical files      Kinds
ELF                .o, .a, .so        object, static library, shared library
Mach-O             .o, .a, .dylib     object, static library, dynamic library

The inventory also classifies the artifact at a higher level.

Current metadata includes:

  • format
  • platform
  • kind
  • capabilities
  • dependency_edges
  • symbols

Artifact Capabilities

capabilities currently capture whether an artifact exports symbols or imports symbols.

That distinction matters for differentiating linkable providers from dependency-only inputs.

Symbol Entries

Each SymbolEntry carries:

  • normalized name
  • optional raw_name
  • direction
  • visibility
  • whether it is a function or variable-like symbol
  • binding
  • optional size
  • optional section
  • optional archive_member
  • optional reexported_via
  • optional alias_of

Normalized vs Raw Name

The normalized name is used for matching declarations. The raw name preserves the original artifact spelling.

direction is also important: only exported symbols are candidate providers during validation. Imported symbols are still preserved because they matter for shared-library and link-planning analysis.

alias_of is preserved when LINC can see more than one exported symbol name resolving to the same section or address identity.

ELF Symbol Versions

On ELF artifacts, SymbolEntry.version preserves symbol-version evidence when the object reader can see it.

Downstream consumers should read that evidence conservatively:

  • version presence is useful provider metadata
  • version absence is not proof that the symbol is unversioned everywhere
  • version equality helps distinguish exports that share a base symbol name
  • version differences should be treated as a reason to avoid collapsing providers too aggressively

LINC does not implement a full ELF linker/version-script resolver. It keeps the version strings as evidence and leaves policy to downstream consumers.

Archive Member Provenance

For static libraries, LINC preserves the member path or name that provided each symbol when available.

That lets downstream validation report a provider more precisely than just the archive path.

Shared-Library Dependency Edges

On ELF shared libraries and executables, LINC captures DT_NEEDED dependencies into dependency_edges.

This is not a full dynamic-loader model. It is still useful because it exposes artifact-declared native dependencies in the inventory itself.

When LINC sees imported symbols inside a shared library or executable, it also preserves symbol-local reexported_via evidence using those dependency edges.

Platform Behavior Notes

Mach-O commonly prefixes external symbols with _. LINC normalizes those names so C declarations and native symbols compare more naturally.

That normalization is intentionally paired with raw_name preservation so no spelling evidence is lost.
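A minimal sketch of that pairing follows. The real SymbolEntry carries many more fields; only the leading-underscore convention and the normalized/raw split come from the text above.

```rust
// Sketch of Mach-O underscore normalization with raw-name preservation.
// Illustrative only; not linc's real SymbolEntry.
struct NamePair {
    normalized: String,
    raw_name: Option<String>,
}

fn normalize_macho(raw: &str) -> NamePair {
    match raw.strip_prefix('_') {
        // Keep the original spelling so no evidence is lost.
        Some(stripped) => NamePair {
            normalized: stripped.to_string(),
            raw_name: Some(raw.to_string()),
        },
        None => NamePair {
            normalized: raw.to_string(),
            raw_name: None,
        },
    }
}

fn main() {
    let n = normalize_macho("_mylib_init");
    assert_eq!(n.normalized, "mylib_init");
    assert_eq!(n.raw_name.as_deref(), Some("_mylib_init"));
    println!("{} (raw: {:?})", n.normalized, n.raw_name);
}
```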

Mach-O support should still be read conservatively:

  • imported symbols are useful dependency evidence, not proof of final loader behavior
  • re-export inferences are narrower than a full dyld model
  • framework and install-name semantics remain downstream policy concerns
  • normalized names are for matching, while raw_name stays the authoritative original spelling
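The underscore normalization paired with raw-name preservation can be sketched like this. The struct and function names are illustrative, not the crate's real API.

```rust
/// Illustrative sketch of the documented Mach-O name handling: strip one
/// leading underscore for matching, but keep the raw spelling as evidence.
struct NormalizedName {
    raw_name: String,  // authoritative original spelling
    matchable: String, // normalized form used for declaration matching
}

fn normalize_macho(raw: &str) -> NormalizedName {
    let matchable = raw.strip_prefix('_').unwrap_or(raw).to_string();
    NormalizedName {
        raw_name: raw.to_string(),
        matchable,
    }
}

fn main() {
    let n = normalize_macho("_printf");
    assert_eq!(n.matchable, "printf");
    // No spelling evidence is lost: the raw name stays intact.
    assert_eq!(n.raw_name, "_printf");
}
```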

Mach-O Limits And Conservative Provider Policy

Downstream consumers should treat Mach-O provider evidence more conservatively than straightforward ELF export evidence.

That is not because the current inventories are weak. It is because Mach-O linking and loading semantics often depend on more context than a plain symbol table can prove by itself.

Important examples:

  • install names are loader identity, not just filenames
  • frameworks are resolved through a different search model than plain libraries
  • re-export chains can involve dependency structure outside the immediate artifact
  • symbol spelling and visibility evidence are useful, but not a complete dyld decision procedure

When To Use Inventories Directly

Use inspect_symbols(...) directly when:

  • you want to debug a native artifact before validating bindings
  • you need artifact metadata without having headers available
  • you want to compare two builds of the same native library
  • you need archive-member or dependency-edge evidence for a linker-oriented workflow
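For the build-comparison use case, a minimal sketch of diffing two exported-name sets (the real comparison would walk full SymbolInventory entries, not just names):

```rust
use std::collections::BTreeSet;

/// Compare the exported-name sets of two builds of the same native library,
/// returning (removed, added) names in sorted order.
fn exported_diff(old: &BTreeSet<String>, new: &BTreeSet<String>) -> (Vec<String>, Vec<String>) {
    let removed = old.difference(new).cloned().collect();
    let added = new.difference(old).cloned().collect();
    (removed, added)
}

fn main() {
    let old: BTreeSet<String> = ["foo_init", "foo_run"].iter().map(|s| s.to_string()).collect();
    let new: BTreeSet<String> = ["foo_init", "foo_run_v2"].iter().map(|s| s.to_string()).collect();
    let (removed, added) = exported_diff(&old, &new);
    assert_eq!(removed, vec!["foo_run"]);
    assert_eq!(added, vec!["foo_run_v2"]);
}
```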

Link Surface

BindingPackage.link is the normalized native-link surface attached to a scan.

This is one of the most important pieces of LINC because evidence generation alone is not enough. Downstream tools also need to know what native inputs are expected at link time.

BindingLinkSurface currently carries:

  • preferred_mode
  • native_surface_kind
  • platform_constraints
  • include_paths
  • framework_paths
  • library_paths
  • libraries
  • frameworks
  • artifacts
  • ordered_inputs

This deliberately preserves both the normalized buckets (such as libraries) and the original ordering information (via ordered_inputs).

Why Ordered Inputs Matter

Link order can be semantically significant, especially with static archives, mixed object/archive inputs, and linkers that resolve left-to-right.

If LINC only preserved deduplicated buckets, a downstream tool could lose the original intended order and silently produce a different result.
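A sketch of why the ordered view matters for a left-to-right linker. The `OrderedInput` enum here is a hypothetical simplification; the real ordered_inputs entries carry kind and source metadata as well.

```rust
// Hypothetical simplified ordered-input shape for illustration.
enum OrderedInput {
    Library(String),  // a -lfoo style name
    Artifact(String), // a concrete object or archive path
}

/// Emit left-to-right linker arguments preserving declared order, which a
/// deduplicated bucket view alone could not reconstruct.
fn linker_args(inputs: &[OrderedInput]) -> Vec<String> {
    inputs
        .iter()
        .map(|input| match input {
            OrderedInput::Library(name) => format!("-l{name}"),
            OrderedInput::Artifact(path) => path.clone(),
        })
        .collect()
}

fn main() {
    // With a static archive before the library that satisfies it, order matters.
    let inputs = vec![
        OrderedInput::Artifact("vendor/libfoo.a".to_string()),
        OrderedInput::Library("z".to_string()),
    ];
    assert_eq!(linker_args(&inputs), vec!["vendor/libfoo.a", "-lz"]);
}
```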

Declared Libraries

Library-name inputs are recorded with a name, a kind, and a source.

Kinds:

  • Default
  • Static
  • Dynamic

Provider matching for declared library names is intentionally tolerant of ordinary platform naming shapes.

Concrete Artifacts

When the binding surface depends on explicit files instead of library names, use artifact inputs.

Each artifact preserves path, kind, and source. That is important for vendored or generated native inputs that are not discoverable through a generic -lfoo model.

Framework Inputs

For Apple-style native dependencies, frameworks are preserved separately from ordinary library names because they are resolved differently by downstream toolchains.

Preferred Mode

preferred_mode captures the scan-time preference between default, preferred static, and preferred dynamic.

This is not the same as hard pinning every input. It is a policy hint attached to the package.

Native Surface Kind

native_surface_kind classifies the package at a higher level:

  • HeaderOnly
  • LibraryNames
  • ConcreteArtifacts
  • Mixed

This gives downstream consumers a quick decision point.
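One possible shape of that decision point, treating the four documented classifications as a plain enum (the policy function is illustrative, not part of the crate):

```rust
// The four documented native_surface_kind classifications.
enum NativeSurfaceKind {
    HeaderOnly,
    LibraryNames,
    ConcreteArtifacts,
    Mixed,
}

/// Example downstream decision: does this package need link-provider
/// resolution before generated output can be trusted to link?
fn needs_provider_resolution(kind: &NativeSurfaceKind) -> bool {
    match kind {
        NativeSurfaceKind::HeaderOnly => false,
        NativeSurfaceKind::LibraryNames
        | NativeSurfaceKind::ConcreteArtifacts
        | NativeSurfaceKind::Mixed => true,
    }
}

fn main() {
    assert!(!needs_provider_resolution(&NativeSurfaceKind::HeaderOnly));
    assert!(needs_provider_resolution(&NativeSurfaceKind::Mixed));
}
```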

Requirement Provenance

Link requirements preserve a source:

  • Declared
  • Inferred
  • Discovered

That distinction matters because downstream tooling often wants to trust user declarations more than inferred guesses while still preserving discovered evidence for reporting and future planning.

Platform Constraints

platform_constraints are package-level target applicability hints.

Today they are strings rather than a rich constraint language. That still makes them useful for simple target gating, downstream filtering, and build-graph selection.

Most consumers should read the link surface directly from BindingPackage. That keeps link-planning policy in the downstream library or tool that consumes LINC.

Normalized Plan Artifact

ResolvedLinkPlan is the normalized planning artifact. It is intentionally not a full filesystem-resolved linker invocation.

When inventories are available, consumers can separate declared requirements from candidate providers. Each declared requirement's resolution against those providers is recorded as one of:

  • Resolved
  • Ambiguous

When providers come from inspected shared libraries, their dependency edges are also preserved in the plan so downstream tooling can see the current known transitive native surface without losing the distinction between declared requirements and discovered dependency evidence.

That means a planning inventory can legitimately resolve against macOS text stubs such as /usr/lib/libSystem.tbd even when later runtime or deployment policy is handled somewhere else.

Requirement and provider provenance are also preserved explicitly:

  • requirement source stays attached from the declared package metadata
  • provider provenance distinguishes exact declared-artifact matches from discovered inventory-based matches

Validation

Validation compares a BindingPackage against one or more SymbolInventory values.

It answers a practical question: do the declarations we extracted line up with what the native artifacts actually provide?

API Entry Points

Use validate for one artifact and validate_many for several.

What Validation Looks At

Validation focuses on symbol presence, symbol kind, visibility, binding strength, decorated names, and conservative ABI-shape evidence where the artifact can prove something honestly.

Common Statuses

Current statuses include:

  • Matched
  • AbiShapeMismatch
  • Missing
  • UnresolvedDeclaredLinkInputs
  • DecorationMismatch
  • NotAFunction
  • NotAVariable
  • Hidden
  • WeakMatch
  • DuplicateProviders

How To Read A Report

  • Matched means the declaration resolved to a visible symbol of the expected kind
  • Missing means no matching symbol was found and the package did not declare native link inputs that might reasonably have provided it
  • UnresolvedDeclaredLinkInputs means the package did declare native inputs, but validation still found no provider
  • DecorationMismatch means the match succeeded only through a decorated or raw spelling that normalized to the declaration name, not through an exact symbol-name match
  • Hidden and WeakMatch should usually be treated more conservatively than a strong export
  • DuplicateProviders usually blocks promotion until the consumer chooses a policy
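The reading rules above amount to a promotion policy. A minimal sketch, modeling the documented statuses as a plain enum (the three-way bucketing is one example policy, not the crate's behavior):

```rust
// Documented validation statuses, modeled as a plain enum; the real report
// carries per-finding detail alongside each status.
enum Status {
    Matched,
    AbiShapeMismatch,
    Missing,
    UnresolvedDeclaredLinkInputs,
    DecorationMismatch,
    NotAFunction,
    NotAVariable,
    Hidden,
    WeakMatch,
    DuplicateProviders,
}

/// Example consumer policy: pass strong matches, flag conservative evidence
/// for explicit review, and block everything else by default.
fn promotion_policy(status: &Status) -> &'static str {
    match status {
        Status::Matched => "pass",
        // Weaker evidence: usable, but needs an explicit consumer decision.
        Status::DecorationMismatch | Status::Hidden | Status::WeakMatch => "review",
        // Mismatches, missing providers, and duplicates block by default.
        _ => "block",
    }
}

fn main() {
    assert_eq!(promotion_policy(&Status::Matched), "pass");
    assert_eq!(promotion_policy(&Status::DuplicateProviders), "block");
}
```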

Provider Evidence

Provider evidence may include plain artifact paths or archive-member provenance such as libfoo.a:bar.o.

Consumer Rule

Validation findings are structured evidence, not hard execution errors. Treat them as policy input for the next stage.

API Contract

This chapter defines the intended public library surface of LINC as it exists today, not as we might wish it already looked.

First Principle

LINC is a library crate. The intended downstream pattern is:

  1. call the crate from Rust
  2. obtain structured values such as LinkAnalysisPackage, SymbolInventory, and ValidationReport
  3. serialize those values only when another tool or process boundary needs them

Preferred Public Surface

The crate root is still the preferred consumer boundary, but there are two real layers inside it:

  1. preferred contract-first APIs
  2. lower-level IR/bootstrap APIs that remain public

Normative Rules For Consumers

If you are building on top of LINC:

  1. prefer crate-root re-exports over deep module imports
  2. use analyze_source_package as the normal contract-first entry point
  3. treat LinkAnalysisPackage, SymbolInventory, and ValidationReport as the primary transport-level contracts
  4. treat diagnostics and validation results as normal structured output, not as ad hoc log text
  5. do not rely on exact String error text for durable control flow
  6. do not treat extracted declarations alone as sufficient ABI proof for layout-sensitive generation

Public Surface Tiers

  • Tier 1: analyze_source_package, inspect_symbols, probe_type_layouts, validate, validate_many, and LinkAnalysisPackage
  • Tier 2: BindingPackage, root-level IR re-exports, and modules such as probe, symbols, validate, and raw_headers
  • Tier 3: support-oriented modules such as diagnostics, error, and line_markers

Tier 2 and Tier 3 are real and tested. They are just not the first story the book wants new downstream users to build around.

Explicit Non-Goals

The current contract does not yet guarantee typed operational errors across the whole crate, full ABI completeness for all C constructs, or full cross-platform parity across ELF, Mach-O, and Windows-native artifact formats.

It also does not guarantee that repo-local bootstrap flows are the preferred architecture, even though they are public.

Artifact Boundary Reminder

LINC owns evidence, not universal pipeline state. Cross-package translation belongs only in tests/examples/harnesses.

If another chapter sounds broader than this one, treat this chapter as the normative boundary.

Contracts And Policy

This section groups the durable contract and policy chapters.

The most important policy rule is architectural:

  • linc/src/** must not depend on parc or gerc
  • cross-package translation belongs in tests/examples/harnesses
  • linc owns its own internal model and its own evidence artifacts
  • there is no shared ABI crate
  • there is no backward-compatibility burden for old pipeline shapes
  • bootstrap utilities and repo-local shortcuts must never be mistaken for the public contract

Read these chapters as the narrow contract surface. If another chapter sounds broader than this section, trust this section.

JSON Artifacts

This chapter describes how linc treats serialized JSON artifacts.

The important framing is architectural:

  • JSON is an artifact format
  • it is not a promise to preserve old pipeline shapes forever
  • the only shapes that matter are the ones currently documented and tested

linc does not carry a backward-compatibility burden for discarded designs.

First Principle

Consumers may depend on documented field names, documented field meanings, schema_version, and documented defaulting behavior when tests rely on it.

Consumers must not depend on whitespace, pretty-print layout, incidental field ordering, or undocumented fields that happen to be present today.

Main Serialized Artifacts

The main JSON-bearing values are:

  • LinkAnalysisPackage
  • SymbolInventory
  • ValidationReport
  • ResolvedLinkPlan

Version Fields

schema_version is the artifact gate. linc_version identifies the producing build and is useful for provenance.
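The gating rule can be sketched as below. `ArtifactHeader` and `accept` are illustrative names; only the `schema_version` and `linc_version` fields correspond to documented artifact fields.

```rust
/// The schema version this hypothetical consumer was written against.
const SUPPORTED_SCHEMA_VERSION: u32 = 1;

// Illustrative stand-in for the version fields of a deserialized artifact.
struct ArtifactHeader {
    schema_version: u32,
    linc_version: String, // provenance only, never a gate
}

/// Gate on schema_version; report linc_version purely as provenance.
fn accept(header: &ArtifactHeader) -> Result<(), String> {
    if header.schema_version != SUPPORTED_SCHEMA_VERSION {
        return Err(format!(
            "unsupported schema_version {} (supported: {}); produced by linc {}",
            header.schema_version, SUPPORTED_SCHEMA_VERSION, header.linc_version
        ));
    }
    Ok(())
}

fn main() {
    let ok = ArtifactHeader { schema_version: 1, linc_version: "0.1.0".into() };
    assert!(accept(&ok).is_ok());
}
```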

Change Policy

  • do not preserve obsolete artifact envelopes just because they existed earlier
  • do not keep old field layouts alive unless current tests still need them
  • do make semantic changes explicit in docs and fixtures
  • do update schema_version when consumers would otherwise misread the artifact

Maintenance Rule

When artifact shapes change:

  1. update the docs
  2. update or replace the relevant fixtures
  3. update consumers in tests/examples/harnesses
  4. do not carry a dead compatibility layer just to keep an obsolete shape deserializable

Error Surface

This chapter inventories the current public error surface of LINC.

Current State

LINC exposes typed errors via LincError and structured diagnostics inside returned data.

Typed Error Surface Today

The clearest typed error boundary today is around the explicit workflow APIs such as probe_type_layouts(...) and inspect_symbols(...).

What Consumers Should Do Right Now

Downstream users should:

  • treat successful return values as stable enough to consume
  • treat diagnostics in returned data structures as first-class signals
  • avoid matching exact error strings for durable control flow
  • wrap stringly errors at their own boundary if they need structured handling immediately
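The last point can be sketched as follows: wrap a stringly error at the boundary by tagging it with the failed operation, rather than parsing the error text. The `ConsumerError` type and the stand-in fallible call are hypothetical.

```rust
/// Hypothetical consumer-side error type: typed by operation, with the
/// original message preserved verbatim for reporting.
#[derive(Debug)]
enum ConsumerError {
    Inspect { artifact: String, message: String },
}

/// Wrap a string-ish failure at the boundary. The inner call here is a
/// stand-in for a fallible LINC call returning a stringly error.
fn inspect_wrapped(path: &str) -> Result<(), ConsumerError> {
    let result: Result<(), String> = Err("demo failure".to_string());
    result.map_err(|message| ConsumerError::Inspect {
        artifact: path.to_string(),
        message,
    })
}

fn main() {
    // Control flow matches on the typed variant, never on the message text.
    assert!(matches!(
        inspect_wrapped("build/libdemo.so"),
        Err(ConsumerError::Inspect { .. })
    ));
}
```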

What Counts As An Error vs A Diagnostic

  • hard operational failures generally return an error
  • partially understood source constructs often become diagnostics attached to a returned package
  • validation findings are reported as structured match results, not thrown as errors

Error Taxonomy

This chapter defines the intended typed error taxonomy that later implementation slices should converge on.

Goal

LincError should be the crate-wide typed failure surface for operational failures.

The design target is to separate configuration and scan execution failures, preprocessing and parse failures, ABI probe failures, artifact inspection failures, and serialization or schema failures.

Validation findings are deliberately different. They should remain structured report output, not thrown as operational errors.

Current Coverage

The current enum already contains variants for missing headers, preprocessor failure, parse failure, I/O failure, serialization failure, symbol-read failure, unsupported artifact format, and schema-version mismatch.

Intended Category Boundaries

  • configuration failure should be distinguishable from compiler failure
  • consumers should be able to distinguish toolchain invocation failure from source parse failure
  • probe failures should not be collapsed into generic scan or I/O text
  • path and context should be preserved in typed artifact errors

Validation Is Not An Error Channel

Validation mismatches should not be encoded as LincError. Validation should keep returning ValidationReport.

Field Stability

This chapter classifies the current BindingPackage artifact by stability expectations.

Top-Level BindingPackage

The top-level package fields fall into three practical groups:

  • contract identity fields
  • stable container fields
  • evolving evidence fields

Contract Identity Fields

| Field | Current classification | Notes |
|---|---|---|
| schema_version | required contract field | artifact-shape gate |
| linc_version | stable provenance field | producer version, not the main shape gate |
| source_path | useful provenance field | helpful, but not the primary artifact anchor |

Stable Container Fields

The major package sections downstream tools can reasonably depend on existing are target, inputs, macros, layouts, link, items, and diagnostics.

Practical Rule For Downstream Consumers

Rely on top-level package sections and documented meanings, treat nested metadata as additive/defaultable unless explicitly documented otherwise, and use schema_version as the hard artifact boundary.

Failure Model

This chapter defines the intended boundary between hard failures, diagnostics, and validation findings.

The Three Outcome Classes

  1. hard operational failure
  2. successful analysis with diagnostics
  3. successful validation with findings

Consumer Rule

  • Err(...) means the requested operation itself failed
  • diagnostics mean the operation succeeded, but the returned analysis may be partial or lossy
  • validation findings mean the operation succeeded and produced evidence that the native surface does not match expectations cleanly

That means a robust downstream integration should not collapse everything into a single boolean “success” value.
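A minimal sketch of keeping the three classes distinct instead of collapsing them. The `Outcome` enum and the `(diagnostic_count, finding_count)` result shape are illustrative simplifications of the real types.

```rust
// Illustrative stand-in for the three outcome classes; in the real crate
// these correspond to LincError, package diagnostics, and report findings.
enum Outcome {
    HardFailure(String),
    CleanSuccess,
    SuccessWithDiagnostics(usize),
    SuccessWithFindings(usize),
}

/// Classify a run without flattening everything into one boolean.
/// Ok carries (diagnostic_count, finding_count) in this sketch.
fn classify(result: Result<(usize, usize), String>) -> Outcome {
    match result {
        Err(e) => Outcome::HardFailure(e),
        Ok((0, 0)) => Outcome::CleanSuccess,
        Ok((d, 0)) => Outcome::SuccessWithDiagnostics(d),
        Ok((_, f)) => Outcome::SuccessWithFindings(f),
    }
}

fn main() {
    assert!(matches!(classify(Ok((0, 0))), Outcome::CleanSuccess));
    assert!(matches!(classify(Ok((2, 0))), Outcome::SuccessWithDiagnostics(2)));
    assert!(matches!(classify(Err("io".into())), Outcome::HardFailure(_)));
}
```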

Schema Version Review

This chapter records the current review decision for SCHEMA_VERSION.

Current Decision

The schema version remains:

SCHEMA_VERSION = 1

Why It Has Not Been Bumped Yet

The recent changes have mostly been additive fields, additive nested metadata, serde-defaultable structures, and richer evidence attached to existing top-level containers.

Current Review Standard

A future bump to 2 should happen when an existing field changes meaning in a way old consumers would misread, a representation changes in a non-defaultable way, or the project decides the current shape can no longer evolve safely within v1.

Stable Usage Patterns

This chapter describes the usage patterns that are most likely to remain stable for downstream library consumers.

Pattern 1: Prefer Root-Level Entry Points

Prefer HeaderConfig, probe_type_layouts, inspect_symbols, validate, validate_many, and analyze_source_package.

Pattern 2: Treat BindingPackage As The Primary Product

A typical flow is:

  1. analyze or bootstrap source input
  2. inspect package.diagnostics
  3. optionally attach or compare native evidence
  4. serialize the package only when crossing a tool boundary

Pattern 3: Treat Diagnostics As Contractual Data

Read package.diagnostics, classify which diagnostic kinds are blocking for your downstream generator, and make the decision explicit in your consumer.

Pattern 4: Gate Artifact Consumption On schema_version

Gate on schema_version, treat linc_version as provenance, and keep fixture coverage for the payload shapes you rely on.

Pattern 5: Preserve Native Metadata Instead Of Re-Deriving It

When package.link, package.layouts, or symbol inventories are available, prefer using them.

Pattern 6: Keep Validation Separate From Transport Failure

Execution failure should be handled as Err(...). Successful validation with mismatches should be handled as structured evidence.

Anti-Patterns

  • matching on exact free-form error strings
  • treating pretty JSON formatting as a semantic contract
  • assuming every successful scan is generation-ready
  • inferring link intent only from declarations when package-level link evidence already exists

LINC vs bindgen

This document compares two different approaches to C interop from the point of view of the current toolchain split.

The Short Version

  • bindgen is a header-to-Rust transpiler
  • LINC is an evidence engine

bindgen answers “what does this header say?” LINC answers “what does the source say, what does the artifact say, and do they agree?”

Parsing

bindgen depends on libclang and the Clang frontend. LINC keeps parsing and source extraction upstream of its own evidence layer and does not depend on libclang.

Internal Representation

bindgen’s IR is transient and internal to one code generation run. LINC’s BindingPackage is a durable, serialized evidence contract.

ABI Discovery

bindgen reads ABI information from libclang. LINC can attach compiler-probed layout evidence and keep that evidence alongside the rest of the analysis package.

Symbol Inspection And Validation

bindgen does not own native artifact inspection or validation. LINC does.

Code Generation

bindgen’s job ends in generated Rust. LINC’s job ends in evidence. A downstream tool such as gerc can consume LINC’s evidence and emit Rust or build metadata.

When To Use Which

Use bindgen when you want a direct header-to-Rust generator and are willing to pay the libclang cost. Use LINC when you want analysis, evidence, link metadata, validation, and downstream policy separation.

That is the real split:

  • bindgen centers immediate Rust emission
  • LINC centers analysis and evidence that another tool can consume later

End-To-End Workflows

This chapter ties the separate surfaces together into practical workflows.

Workflow 1: Analyze A Source Contract And Save JSON

```rust
use linc::{analyze_source_package, SourcePackage};

fn main() {
    let analysis = analyze_source_package(&SourcePackage::default());
    let json = serde_json::to_string_pretty(&analysis).unwrap();
    std::fs::write("link-analysis.json", json).unwrap();
}
```

Workflow 2: Translate PARC Artifacts In Tests Or Harnesses

The intended cross-package architecture is artifact-based, not shared-type based. Library code should not import parc; translation belongs in tests, examples, or external harnesses.

Workflow 3: Inspect A Native Artifact

```rust
use linc::inspect_symbols;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    let inventory = inspect_symbols("build/libdemo.so")?;
    let json = serde_json::to_string_pretty(&inventory)?;
    std::fs::write("symbols.json", json)?;
    Ok(())
}
```

Workflow 4: Validate Source-Derived Bindings Against Artifacts

Validation currently compares a BindingPackage against one or more inventories. That lower-level path is still part of the crate surface, even if the preferred intake story starts from SourcePackage.

Workflow 5: Consume Link Metadata Directly

Use analysis.declared_link_surface and analysis.resolved_link_plan when a downstream tool only wants link names, artifact inputs, framework inputs, platform constraints, or ordering metadata.

Workflow 6: Downstream fol / gerc Consumption

  1. parc produces a source artifact
  2. tests/examples/harnesses translate that artifact into linc input
  3. linc produces LinkAnalysisPackage
  4. downstream generation reads source and link analysis in parallel

Workflow 7: Repo-Local Bootstrap

The raw-header bootstrap path exists for difficult fixtures and repository self-hosting. It is public, tested, and useful. It is just not the package boundary that new downstream tools should center first.

Workflow 8: ABI-Sensitive Packages

For packages with important struct ABI, attach layout evidence, inspect symbols, and validate before generation.

This is the right pattern when a downstream generator wants stronger evidence without turning LINC into the generator itself.

Operations And Release

This section covers the operational and release posture of LINC.

Operations

LINC is a library-first analysis tool. It is meant to be embedded, tested, and serialized, not launched as a separate end-user service.

Release

A release should be judged on build and test health, JSON contract stability, documentation alignment, fixture coverage, and platform support posture.

The practical release split is:

  • hermetic vendored baselines must stay green everywhere
  • host-dependent large evidence ladders should stay green where available
  • failure suites must keep proving conservative behavior instead of optimistic guessing

The grouped failure suites now live in:

  • failure_matrix_link for unresolved and duplicate provider outcomes
  • failure_matrix_validation for hidden, kind-mismatch, and ABI-mismatch validation states
  • failure_matrix_probe for invalid bootstrap config and probe unavailable-vs-failed separation

The architectural rule remains the same here too:

  • LINC owns evidence and analysis
  • downstream build and generation policy still belongs outside LINC
  • tests/examples/harnesses are where cross-package composition is proven

Platform Support

This chapter records the current practical platform-support posture of LINC.

Current Matrix

| Area | Linux / ELF | Apple / Mach-O | Windows / COFF |
|---|---|---|---|
| header scanning | usable | usable, with Apple-specific link metadata support | limited by missing Windows-native completion work |
| macro capture | usable | usable | compiler-dependent and not yet fully characterized |
| layout probing | usable with GCC/Clang-style toolchains | usable with Clang-style toolchains | not yet a completed support target |
| symbol inventory | usable | partial but present | present for COFF objects, import libraries, and PE binaries, but still not production-ready |
| validation | usable where symbol inventory is usable | partial | limited; supported inventory classes are tested but the Windows linker model is still incomplete |
| link-surface metadata | usable | usable, including frameworks | partial, especially around Windows-native link forms |

What “Usable” Means Here

Usable means the feature exists, it has direct test coverage in this repository, and it is reasonable for controlled internal use.

  • prefer Linux/ELF for the most mature end-to-end native validation path
  • treat Apple support as useful but still maturing
  • treat Windows-native linker/artifact support as incomplete

For the Level 1 production claim, interpret that as:

  • Linux/ELF = primary production platform
  • Apple/Mach-O = secondary confidence scope
  • Windows/COFF = tested but non-primary confidence scope

fol Integration Guide

This chapter describes the intended producer/consumer contract between LINC and fol.

Division Of Responsibility

  • LINC extracts declarations, metadata, diagnostics, layouts, and native evidence
  • fol consumes that evidence to generate bindings and apply policy

What fol Should Expect

  • BindingPackage as the primary declaration and metadata contract
  • BindingPackage.diagnostics as explicit extraction warnings and partial-fidelity signals
  • BindingPackage.layouts when ABI-sensitive types need compiler-probed evidence
  • BindingPackage.link as the normalized native dependency surface
  • SymbolInventory and ValidationReport when native artifact matching matters
Recommended Flow

  1. run LINC header scanning or source analysis
  2. inspect BindingPackage.diagnostics
  3. require layout probes for ABI-sensitive types
  4. inspect native artifacts and run validation when linkable symbols matter
  5. pass the resulting structured values to fol
  6. let fol decide what to generate, reject, or gate behind policy

Contract Boundaries

  • schema_version is the wire-compatibility gate
  • linc_version is producer provenance
  • BindingPackage is the declaration and metadata contract
  • ValidationReport is evidence, not an exception channel
  • diagnostics are part of the data contract, not incidental logs

Minimal Durable Contract

The shortest durable fol contract is:

  1. a serialized BindingPackage
  2. optional SymbolInventory values
  3. optional ValidationReport
  4. explicit consumer policy over diagnostics, layout evidence, and link evidence

Macro Semantics

MacroBinding is the normalized macro representation in the package.

Intended Semantics By Category

  • BindableConstant: safe candidates for generated constants
  • ConfigurationFlag: environment and availability signals
  • AbiAffecting: macros that may influence layout or calling behavior
  • Unsupported: capture and report, but do not assume a safe lowering path

Function-Like vs Object-Like

MacroForm preserves whether the macro was object-like or function-like. That distinction matters because function-like macros are often not safe to lower automatically.

Consumer Guidance

Consumers should treat macro evidence as policy input, not as a promise that every captured macro should become a generated constant.
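One way to express that policy input, modeling the documented categories and forms as enums (the `should_generate_constant` policy is an example, not the crate's rule):

```rust
// The documented macro categories and forms, modeled as plain enums.
enum MacroCategory {
    BindableConstant,
    ConfigurationFlag,
    AbiAffecting,
    Unsupported,
}

enum MacroForm {
    ObjectLike,
    FunctionLike,
}

struct MacroBinding {
    name: String,
    category: MacroCategory,
    form: MacroForm,
}

/// Example policy: only object-like bindable constants become generated
/// constants; everything else stays visible as evidence.
fn should_generate_constant(m: &MacroBinding) -> bool {
    matches!(m.category, MacroCategory::BindableConstant)
        && matches!(m.form, MacroForm::ObjectLike)
}

fn main() {
    let max = MacroBinding {
        name: "FOO_MAX".to_string(),
        category: MacroCategory::BindableConstant,
        form: MacroForm::ObjectLike,
    };
    assert!(should_generate_constant(&max));
}
```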

Support Tiers

This chapter groups the platform and feature posture into rough tiers.

Tier Definitions

  • Tier 1: preferred and well-tested
  • Tier 2: supported but still maturing
  • Tier 3: experimental or incomplete

Current Tier Assignment

ELF-oriented flows are the strongest tier. Mach-O is useful but still maturing. Windows-native support is incomplete.

Downstream Guidance

Consumers should encode these tiers explicitly in their own release policy instead of assuming uniform maturity across platforms.

Unsupported Cases

This chapter keeps unsupported or incomplete areas explicit.

Native Artifact Formats

Windows-native artifact support is still incomplete compared with ELF and Mach-O.

ABI Modeling

LINC does not yet model every ABI detail for every record shape. Layout evidence is conservative and partial where needed.

Macro Semantics

Not every macro should be lowered automatically. Unsupported macros remain visible as evidence.

Validation Depth

Validation is evidence, not a full platform linker oracle.

Why This Chapter Exists

Unsupported cases should stay visible so downstream consumers can make policy choices explicitly.

fol Minimal Contract

This chapter defines the smallest durable contract that fol can rely on.

Minimal Required Inputs

  • a serialized BindingPackage
  • diagnostics

Minimal Required Semantics

The minimal contract lets fol inspect declarations and policy-gate on diagnostics.

What The Minimal Contract Does Not Promise

It does not promise layout evidence, link resolution, or validation.

fol Extended Contract

This chapter defines the optional evidence that makes fol more confident.

Extended Optional Inputs

  • layouts
  • link
  • SymbolInventory
  • ValidationReport
  • macros

Why This Contract Is Optional

Some generation tasks only need declarations. ABI-sensitive or publication-quality workflows usually need more evidence.

Consumer Rule

Use the extended contract when the downstream decision really depends on layout, link, or validation evidence.

Cross-Repo Versioning

This chapter describes how producers and consumers should think about versioned artifacts across repository boundaries.

Artifact Keys

Use schema_version as the artifact gate and linc_version as provenance.

Coordination Rules

Cross-repo consumers should pin the artifact shape they understand and reject future shapes instead of guessing.

Additive Changes

Additive changes should be documented and fixture-tested.

Breaking Contract Changes

Breaking changes require explicit review and a schema bump when older consumers would misread the payload.

Reproducibility

This chapter describes what must be reproducible for LINC to be trustworthy.

Reproducibility Requirements

  • checked-in JSON contract fixtures must be deterministic
  • library-only unit tests should be deterministic without requiring internet access
  • toolchain-dependent tests should be explicit about their assumptions

Fixture Rules

Prefer checked-in headers, JSON payloads, and small native test artifacts where practical.

Contract Tests

The main contract tests should prove that source intake, validation, and link planning stay explainable and stable.

Link Resolution Boundary

This chapter defines the boundary between LINC link metadata and downstream build-system work.

What LINC Resolves Today

LINC preserves declared native link intent, normalized native link metadata, ordered inputs, requirement provenance, platform hints, symbol inventories, and validation evidence.

What LINC Does Not Resolve Today

LINC does not promise final linker invocation, full search-path expansion, or runtime loader behavior.

Practical Rule For Consumers

Treat BindingPackage.link as normalized requirement metadata and keep final linker invocation in downstream tooling.

Release Checklist

Use this checklist before cutting a release candidate.

Build And Test

  • run make build
  • run make test

Canonical Hardening Gates

  • confirm hermetic baselines still pass
    • vendored zlib
    • vendored libpng
    • plugin ABI
    • combined daemon fixture
  • confirm at least one host-dependent large-evidence ladder still passes where available
    • OpenSSL
    • Linux event-loop stack
  • confirm failure suites still reject duplicate, unresolved, hidden, decorated, and ABI-questionable cases conservatively
  • confirm plugin-style dl surfaces still produce explicit runtime-boundary notes instead of over-claiming runtime truth
  • confirm the hermetic ELF static, Mach-O framework, and Windows PE fixture suite still passes
  • confirm determinism anchors still hold on the canonical large surfaces

Contract Surfaces

  • confirm the documented JSON artifact shapes remain consumable by the current schema version
  • confirm ValidationReport fixture coverage still matches current structured fields

Documentation

  • confirm README wording matches tested behavior
  • confirm the book reflects current API entry points and platform scope

Consumer Boundary

  • confirm the generic library contract stays primary
  • confirm cross-package composition is still described as tests/examples/harness work, not crate-to-crate library coupling

Release Decision

Do not treat “builds successfully” as sufficient. The code, docs, and fixtures all need to match the same boundary.

Hermeticity Matrix

This chapter turns the large LINC evidence suite into an explicit hermeticity ladder.

The central question is not just “does a test pass”. The central question is “what kind of evidence confidence does this surface buy us”.

Tier 1: Always-On Hermetic Baselines

These are the first confidence anchors and should remain green everywhere:

  • vendored zlib
  • vendored libpng
  • plugin ABI fixtures
  • combined daemon and max-pain fixtures
  • explicit ELF / Mach-O / Windows inventory confidence-floor fixtures

These surfaces prove that LINC can:

  • consume source-shaped input
  • derive declared link surface
  • resolve providers on controlled artifacts
  • emit stable evidence and validation products

Tier 2: Host-Dependent High-Value Ladders

These add confidence on real native environments when the libraries and headers exist:

  • OpenSSL
  • Linux event-loop stack
  • epoll and socketcan examples
  • other real system-library probes in the stress suites

These surfaces matter because they are closer to the real deployment problem than vendored toy cases.

Tier 3: Failure And Conservative-Evidence Surfaces

These prove that LINC is refusing or degrading honestly:

  • duplicate provider cases
  • unresolved provider cases
  • hidden or decorated symbol mismatches
  • ABI-questionable fixtures
  • partial or missing layout evidence
  • typed operational errors for unreadable artifacts, unsupported formats, and malformed serialized input
  • explicit Mach-O framework and dylib provider-policy checks

Those are release-positive tests when they stay:

  • deterministic
  • diagnostic
  • intentionally conservative
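The conservative-refusal idea above can be sketched as a minimal, self-contained classifier. Everything here is illustrative: `EvidenceError` and `classify_magic` are hypothetical names, not part of the LINC API; the only real facts are the standard ELF, Mach-O, and PE magic bytes.

```rust
// Hypothetical sketch: a classifier that refuses rather than guesses.
// `EvidenceError` and `classify_magic` are NOT real LINC names; only the
// object-format magic numbers are standard.
#[derive(Debug, PartialEq)]
enum EvidenceError {
    UnreadableArtifact, // empty or unopenable input
    MalformedInput,     // looks like a known format but the header is truncated
    UnsupportedFormat,  // honest refusal instead of a guessed answer
}

fn classify_magic(bytes: &[u8]) -> Result<&'static str, EvidenceError> {
    match bytes {
        [] => Err(EvidenceError::UnreadableArtifact),
        [0x7f, b'E', b'L', b'F', ..] => Ok("ELF"),
        // 0xfeedfacf (MH_MAGIC_64) as stored little-endian on disk
        [0xcf, 0xfa, 0xed, 0xfe, ..] => Ok("Mach-O"),
        [b'M', b'Z', ..] => Ok("PE"),
        // an ELF-looking prefix that is cut short is malformed, not "unknown"
        [0x7f, ..] => Err(EvidenceError::MalformedInput),
        _ => Err(EvidenceError::UnsupportedFormat),
    }
}

fn main() {
    assert_eq!(classify_magic(&[0x7f, b'E', b'L', b'F', 2, 1]), Ok("ELF"));
    assert_eq!(classify_magic(&[]), Err(EvidenceError::UnreadableArtifact));
    assert_eq!(
        classify_magic(b"not an object file"),
        Err(EvidenceError::UnsupportedFormat)
    );
    println!("conservative classification ok");
}
```

The design point is the typed error, not the format detection: a diagnostic, deterministic refusal is a release-positive result.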

Determinism Anchors

The most important repeat-run anchors right now are:

  • vendored zlib
  • vendored libpng
  • combined daemon fixture
  • confidence-floor inventory fixtures
  • OpenSSL when available
  • Linux event-loop analysis

If any of those become unstable, the evidence story should be treated as weaker, even if many unit tests still pass.
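A repeat-run anchor can be sketched generically. This is an assumption-laden illustration: `analyze` is a stub standing in for a real analysis entrypoint (something shaped like `analyze_source_package`), reduced to sorting a symbol list so the check itself is runnable.

```rust
// Hypothetical sketch of a repeat-run determinism anchor. `analyze` is a
// stand-in, not the real LINC entrypoint; it sorts its input so that the
// output is independent of declaration order.
fn analyze(declared_symbols: &[&str]) -> String {
    let mut syms: Vec<&str> = declared_symbols.to_vec();
    syms.sort_unstable();
    syms.join("\n")
}

// The anchor: two runs over the same input must produce byte-identical
// serialized evidence, or the determinism story is weaker than it looks.
fn assert_deterministic(input: &[&str]) {
    let first = analyze(input);
    let second = analyze(input);
    assert_eq!(first, second, "determinism anchor drifted between runs");
}

fn main() {
    assert_deterministic(&["inflate", "deflate", "zlibVersion"]);
    println!("anchor held");
}
```

Comparing full serialized artifacts byte-for-byte, rather than spot-checking fields, is what makes this an anchor rather than an ordinary unit test.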

Readiness Scorecard

This chapter summarizes current release readiness by subsystem and ties the score directly to the current hardening ladder.

Overall Readiness

LINC should currently be read as:

  • strong on hermetic evidence production
  • strong on ELF-first symbol and validation workflows
  • useful but more conservative on Mach-O and Windows import-library paths
  • meaningfully hardened on vendored and daemon-style fixtures
  • still dependent on host availability for the largest OpenSSL and Linux-system ladders

For whole-pipeline claims, this score is also capped by downstream gerc anchors that ingest linc evidence in tests/examples.

For Level 1 production, this score should be read as Linux/ELF-first. Apple and Windows readiness should raise confidence, not redefine the primary production envelope.

Subsystem Scorecard

  • source-shaped intake: high
  • JSON artifact stability: high
  • ABI layout evidence: medium-high
  • symbol inventories: high for ELF, medium-high for Mach-O, medium for Windows
  • validation: medium-high
  • link planning: medium-high
  • hermetic large-surface confidence: high
  • host-dependent large-surface confidence: medium-high
  • consumer integration on the documented artifact boundary: high

Canonical Readiness Anchors

The release posture should be judged against these anchors first:

  • vendored zlib
  • vendored libpng
  • plugin ABI fixture
  • combined daemon fixture
  • difficult-record evidence fixtures
  • OpenSSL when available
  • Linux event-loop analysis when available

If those anchors drift, the scorecard should drop even if many smaller unit tests still pass.

How To Read This Scorecard

  • high: the subsystem is a reliable contract surface for normal downstream use
  • medium-high: consumers should still respect the documented limits and expect some host/platform asymmetry
  • medium: the subsystem is useful but should not be oversold as equally mature across all supported environments

Contract Change Checklist

Use this checklist whenever a release includes changes to schema, public API, or checked-in contract fixtures.

Schema Changes

  • confirm whether the change is additive, behavioral, or breaking
  • keep schema_version unchanged for additive/defaulted changes
  • bump schema_version only when older consumers can no longer deserialize or safely interpret the payload

Public API Changes

  • confirm whether the root-level API contract changed or only lower-level modules changed
  • update crate-level docs and book chapters when recommended usage changes

Fixture Changes

  • confirm the fixture still represents a real supported or intentionally unsupported scenario
  • confirm the corresponding regression test explains why the fixture exists

Consumer Guidance Changes

  • confirm generic library guidance stays separate from consumer-specific guidance
  • confirm consumer guidance remains an example profile rather than universal crate policy

Final Questions

Ask:

  • did the change alter what downstream code can safely rely on?
  • did fixture coverage change to prove the new boundary?
  • do the docs now describe the same boundary?