Machine Learning for Singularities

Signed Common Meadows make division by zero algebraic. Train with smooth projective tuples, deploy with strict inference. Track singularities with ⊥ and sign information.

Train Smooth. Infer Strict.

Always‑defined math for edge cases anywhere.

Signed Common Meadows replace epsilon hacks so any model stays stable when denominators vanish or targets blow up.

Totalized Arithmetic

Division by zero yields ⊥ with sign tracking, not exceptions. Clean semantics, deterministic propagation.

How it works

Projective Training

Train with smooth ⟨N,D⟩ tuples and ghost gradients. Stable optimization through singularities.

See details

Strict Inference

Deploy with configurable thresholds and explicit masks. Know when your model hits a singularity.

Get started

Rational Inductive Bias

Learn P(x)/Q(x) instead of piecewise-linear approximations. Better extrapolation, correct asymptotics.

Get started

Built on Solid Mathematical Foundations

ZeroProofML extends PyTorch with Signed Common Meadow semantics, based on the pioneering work of Bergstra and Tucker on meadow algebra. Singularities become first-class algebraic states tracked by ⊥ and sign operators. Train with smooth projective tuples ⟨N,D⟩ and ghost gradients, deploy with strict inference and configurable thresholds. Use it anywhere division by zero matters: robotics, physics simulations, spectral analysis, financial models, or scientific computing.

Read the docs
ZeroProofML: SCM-first. Arithmetic, not patches.

Ready to build with totalized arithmetic? Start with ZeroProofML

Drop-in PyTorch layers for rational neural networks. SCM-aware losses and training utilities. Strict inference with safety masks. Start modeling singularities today.

GitHub
FAQ

Frequently Asked Questions

Learn about Signed Common Meadows and how ZeroProofML handles singularities.

What are Signed Common Meadows?

An algebraic structure where division is totalized: 1/0 yields an absorptive bottom element ⊥, not an exception. We layer a sign operator on top to track orientation near singularities. Based on Bergstra and Tucker's meadow algebra.
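As a rough illustration only (not the ZeroProofML API), totalized division with an absorptive ⊥ and a sign tag can be sketched in a few lines of Python; the SCMValue class and its fields are hypothetical:

from dataclasses import dataclass

@dataclass(frozen=True)
class SCMValue:
    value: float          # numeric payload (ignored once bottom is True)
    sign: int             # orientation near the singularity: +1, -1, or 0
    bottom: bool = False  # True marks the absorptive element ⊥

    def __truediv__(self, other: "SCMValue") -> "SCMValue":
        # ⊥ absorbs: once produced, it propagates through every later operation.
        if self.bottom or other.bottom:
            return SCMValue(0.0, 0, True)
        if other.value == 0.0:
            # Division by zero is total: return ⊥, keeping the numerator's sign
            # so downstream code still knows the orientation of the blow-up.
            return SCMValue(0.0, self.sign, True)
        return SCMValue(self.value / other.value, self.sign * other.sign, False)

one = SCMValue(1.0, +1)
zero = SCMValue(0.0, 0)
print(one / zero)  # SCMValue(value=0.0, sign=1, bottom=True): ⊥, not a ZeroDivisionError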

How does projective training work?

We represent values as homogeneous tuples ⟨N,D⟩ and use detached renormalization with stop_gradient to create ghost gradients. This keeps optimization smooth even when denominators approach zero.
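A minimal sketch of the idea in PyTorch (illustrative only; the function names and the exact renormalization are assumptions, not the library's implementation):

import torch

def renormalize(N: torch.Tensor, D: torch.Tensor):
    # Detached renormalization: rescale the homogeneous tuple <N, D> to unit norm,
    # but detach the scale so the rescaling itself contributes no gradient.
    # The decoded value N/D is unchanged; the tuple just stays numerically bounded.
    scale = torch.sqrt(N ** 2 + D ** 2).clamp_min(1e-12).detach()
    return N / scale, D / scale

def projective_mse(N: torch.Tensor, D: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
    # Compare in homogeneous coordinates: penalize N - target * D instead of
    # N / D - target, so the loss and its gradients stay finite even when D ~ 0.
    N, D = renormalize(N, D)
    return ((N - target * D) ** 2).mean()

Because the scale factor is detached, gradients flow through N and D as if the renormalization were a constant rescaling, which is what keeps optimization smooth as the denominator approaches zero.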

What is strict inference?

At deployment, we decode projective outputs with configurable thresholds (τ_infer, τ_train) and return explicit bottom_mask and gap_mask for safety. You know exactly when your model encounters a singularity.
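A hedged sketch of what such a decode step might look like in PyTorch; strict_decode, the default thresholds, and the reading of gap_mask as the band between the two thresholds are assumptions for illustration, not the library's API:

import torch

def strict_decode(N: torch.Tensor, D: torch.Tensor,
                  tau_infer: float = 1e-6, tau_train: float = 1e-3):
    abs_D = D.abs()
    bottom_mask = abs_D <= tau_infer                       # output treated as ⊥
    gap_mask = (abs_D > tau_infer) & (abs_D <= tau_train)  # assumed "uncertain" band
    safe_D = torch.where(bottom_mask, torch.ones_like(D), D)
    y = torch.where(bottom_mask, torch.zeros_like(N), N / safe_D)  # placeholder at ⊥
    return y, bottom_mask, gap_mask

Downstream code can then branch on the masks instead of scanning outputs for NaNs or infinities.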

Why rational neural networks?

Rational functions P(x)/Q(x) are a better inductive bias for phenomena with poles and asymptotes. Unlike ReLU networks that extrapolate linearly, rational functions can model super-linear growth and spectral resonances.
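For intuition, an elementwise rational activation can be written directly in PyTorch. This is a generic sketch rather than the ZeroProofML layer, which additionally handles Q(x) → 0 with SCM semantics instead of plain floating-point division:

import torch
import torch.nn as nn

class RationalActivation(nn.Module):
    """Learnable elementwise y = P(x) / Q(x) with polynomial P and Q."""

    def __init__(self, p_degree: int = 3, q_degree: int = 2):
        super().__init__()
        self.p = nn.Parameter(0.1 * torch.randn(p_degree + 1))
        # Start the denominator near the constant 1 so Q(x) is rarely zero at init.
        self.q = nn.Parameter(torch.cat([torch.ones(1), 0.1 * torch.randn(q_degree)]))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        P = sum(c * x ** i for i, c in enumerate(self.p))
        Q = sum(c * x ** i for i, c in enumerate(self.q))
        return P / Q  # unguarded division here; the real layer tracks Q -> 0 explicitly

Unlike a ReLU network, whose output is piecewise-linear and extrapolates linearly outside the training range, a layer like this can represent genuine poles and super-linear growth.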

Is this PyTorch compatible?

Yes. ZeroProofML provides drop-in layers, SCM-aware losses, and training utilities that integrate with standard PyTorch workflows. Operations are JIT-compatible for performance.

Where should I use this?

Anywhere division by zero or singularities matter: robotics (inverse kinematics), physics simulations, spectral analysis, financial models, scientific computing, or any domain where you need deterministic behavior near poles.

Founder

Meet Us

Zsolt Döme

Data Scientist, ML Researcher

Latest Blog

From The Blog

07 Jan 2026

Ghost Molecules: Why Neural Networks Fail at Atomic Repulsion

Standard MLPs create 'Soft Walls' that allow atoms to pass through each other. Here is how we built a 'Hard Wall' with much better physics.

CONTACT

Get in touch

Location

Budapest, Hungary

Email

hello@zeroproofml.com

dome@zeroproofml.com

Send us a Message