AI Training is an Optimization Problem

Solving Optimization with Analog Computing

What Vellex Enables

Vellex makes on-device AI training practical for the first time. Not inference. Not running pre-trained models. Actual training, on hardware that runs on milliwatts.

This means edge devices that learn continuously from their own data, adapt to changing conditions in real time, and improve the longer they're deployed. No cloud retraining. No firmware update cycles. No data leaving the device.

Why This Was Previously Impossible

Today’s optimizers (Adam, SGD) rely on power-hungry, iterative gradient steps, which has made on-device training impractical. By attacking the energy cost of the math itself, rather than chasing processor speed, Vellex brings data-center-class training to coin-cell power budgets.

How It Works

Think of it as the difference between searching every point on a landscape to find the lowest valley, versus placing a ball on the surface and letting gravity do the work. One costs millions of compute cycles. The other is nearly instantaneous and costs almost no energy.

Iterative search: Calculate → Check → Repeat.
Millions of steps | Watts of power.

Physics-based optimization: the system settles at the solution naturally.
One step | Milliwatts.

The result: model training that converges in microseconds at milliwatt power, rather than hours at hundreds of watts.
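The two approaches can be sketched on a toy quadratic "energy" in plain NumPy (an illustration of the idea, not the analog hardware): the iterative loop grinds through many gradient steps, while the direct solve plays the role of the circuit settling to its ground state in one shot.

```python
import numpy as np

# Toy contrast on a quadratic "energy" E(x) = 0.5 x^T A x - b^T x.
# Its minimum is the solution of A x = b.
rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)      # symmetric positive definite
b = rng.standard_normal(4)

# Iterative search: calculate, check, repeat (the digital approach).
x = np.zeros(4)
lr, steps = 0.05, 0
while np.linalg.norm(A @ x - b) > 1e-8:
    x -= lr * (A @ x - b)          # one gradient step down the landscape
    steps += 1

# "Physics-based" settling: the minimum in one shot, standing in for
# the circuit relaxing to its lowest-energy state.
x_direct = np.linalg.solve(A, b)

print(steps)                       # many iterations vs. one solve
print(np.allclose(x, x_direct))    # both land at the same minimum → True
```

Both routes reach the same minimum; the difference is how much work it takes to get there.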

Vellex maps the mathematics of AI training directly onto an analog circuit. The energy landscape of the circuit is programmed to mirror the optimization problem. The circuit then does what physics always does: it settles at its lowest energy state. That minimum is the optimal solution.
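To make the mapping concrete, take least-squares linear regression as a toy example (ours, not a statement of which models the hardware supports). Training means minimizing the energy-like loss

$$E(w) = \tfrac{1}{2}\lVert Xw - y \rVert^2,$$

and the settled state, where the "force" $\nabla E(w) = X^\top (Xw - y)$ vanishes, is exactly the trained weight vector $w^\star$ satisfying $X^\top X\, w^\star = X^\top y$. A circuit whose stored energy tracks $E$ relaxes to that same minimum.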

Built for Integration

Vellex is designed as a complement to your existing AI development stack, not a replacement. Standard cores like ARM or RISC-V excel at inference, control logic, and matrix multiplication.

Vellex handles what they were never built to do efficiently: the iterative optimization at the heart of training. Together, they form a hybrid architecture that delivers the programmability of digital with the energy efficiency of physics.

Engineers work in standard ML frameworks like PyTorch, JAX, or TensorFlow. Our Code-to-Circuit compiler maps their models to Vellex Core automatically. The IP Core is delivered as a licensable GDSII hard macro, compatible with standard semiconductor manufacturing processes, and designed to sit alongside ARM and RISC-V cores.

No analog expertise required.

Grounded in Research

Technical Advantages

  • Milliwatt-Scale Training

    Vellex collapses millions of iterative compute cycles into a single physical event, reducing the energy cost of AI training by orders of magnitude. Devices that could never afford to train, from wireless sensors to battery-powered drones, can now learn continuously without draining their power budget.

  • Microsecond Convergence

    Model parameters reach their optimal state through physical relaxation rather than sequential gradient updates. Training that takes hours on a GPU completes in microseconds on Vellex hardware. In mission-critical environments, your AI adapts in real time, not after the next cloud sync.

  • No Cloud Dependency

    Training happens entirely on-device, so no sensitive data ever leaves the hardware. Eliminating bandwidth overhead, retraining queues, and round trips to a distant data center removes the latency and privacy exposure inherent in traditional, cloud-based training pipelines.

  • Compound Intelligence

    On-device training means your models don't just deploy; they constantly and autonomously improve. The longer a device operates, the better tuned it becomes to its specific local environment, allowing for precise, real-time optimization without ever needing to rely on external, cloud-based training updates.

The Vellex Product Platform

From algorithm to silicon, one integrated stack.

Vellex gives engineering teams a clear path from software evaluation to full silicon deployment. Our Code-to-Circuit compiler accepts models from standard ML frameworks, including PyTorch, JAX, and TensorFlow, and maps them to Vellex IP automatically. The physics is completely abstracted. Engineers write the code they already know.

  • Vellex Train

    A physics-inspired optimization algorithm for AI training that runs entirely in software. Validate the Vellex approach on your current infrastructure today, and migrate to Vellex silicon as your performance demands grow. Available as a free API for researchers, students, and engineers.
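As a rough illustration of what "physics-inspired optimization in software" can look like, here is a damped-dynamics toy of our own, not the actual Vellex Train algorithm: parameters move like a particle on the loss landscape and come to rest at a minimum.

```python
import numpy as np

# Illustrative stand-in for a physics-inspired optimizer (NOT the
# actual Vellex Train algorithm): damped Newtonian dynamics on the loss.
def settle(grad, w0, dt=0.1, damping=0.5, tol=1e-8, max_steps=10_000):
    w = np.asarray(w0, dtype=float)
    v = np.zeros_like(w)
    for _ in range(max_steps):
        g = grad(w)
        if np.linalg.norm(g) < tol:      # forces balanced: system at rest
            break
        v = (1.0 - damping * dt) * v - dt * g   # friction + downhill force
        w = w + dt * v
    return w

# Fit y = 2x + 1 by least squares.
x = np.linspace(-1, 1, 50)
y = 2 * x + 1
X = np.stack([x, np.ones_like(x)], axis=1)
grad = lambda w: X.T @ (X @ w - y) / len(y)

w = settle(grad, [0.0, 0.0])
print(np.round(w, 3))   # → [2. 1.]
```

The ball-on-a-landscape picture from above, expressed as code: the update rule is a discretized damped oscillator, and "training" is just letting it come to rest.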

  • Vellex Board

    A deployment-ready AI accelerator board for real-world edge environments. Plug-and-play hardware abstraction lets engineers train and execute AI workloads in the field in hours, not months, without touching a circuit diagram. Built for teams validating adaptive intelligence in mission-critical conditions.

  • Vellex Core

    A licensable, programmable GDSII analog co-processor compatible with standard manufacturing processes. Sits alongside ARM or RISC-V cores and offloads the mathematical optimization required for AI training into the analog domain, delivering GPU-class training capability at milliwatt power. Available as licensable IP, a chiplet, or a standalone chip.

Go Deeper

Ready to explore the technology behind milliwatt-scale AI training?