A reverse-mode automatic differentiation engine with neural networks, built in C++ with Python bindings.
AutoNeuroNet is a fully implemented automatic differentiation engine with custom matrices, a complete neural network architecture, and a training pipeline. It ships with Python bindings via PyBind11, so networks can be developed quickly and easily in Python while the numerical work runs in C++ for speed.
- Features
- Installation
- Quickstart
- Project Structure
- Building from Source
- Demos
- API Overview
- Documentation
- References
- License
## Features

- Reverse-Mode Automatic Differentiation - Scalar-level AD with a full computation graph and backpropagation
- Custom Matrix Library - 2D differentiable matrices with element-wise and matrix operations
- Neural Network Layers - `Linear`, `ReLU`, `LeakyReLU`, `Sigmoid`, `Tanh`, `SiLU`, `ELU`, `Softmax`
- Loss Functions - `MSELoss`, `MAELoss`, `BCELoss`, `CrossEntropyLoss`, `CrossEntropyLossWithLogits`
- Optimizers - `GradientDescent`, `SGD` (with momentum), `Adagrad`, `RMSProp`, `Adam`, `AdamW`
- Weight Initialization - Kaiming (He) and Xavier (Glorot) initialization
- Model Persistence - Save and load trained model weights
- NumPy Interop - Convert between NumPy arrays and AutoNeuroNet matrices
- Python Bindings - Full C++ performance accessible from Python via PyBind11
- Cross-Platform - Builds on Linux, macOS, and Windows (Python 3.9 - 3.13)
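To make the mechanics concrete, here is a minimal, self-contained reverse-mode AD sketch in plain Python, in the same spirit as the library's `Var`. This is an illustration only, not AutoNeuroNet's actual C++ implementation; it supports just `+`, `*`, and `**`:

```python
# Minimal reverse-mode AD over scalars: each node records its parents
# together with the local gradient of itself with respect to each parent.
class Var:
    def __init__(self, val, parents=()):
        self.val = val
        self.grad = 0.0
        self._parents = parents  # list of (parent, local_gradient)

    def __add__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.val + other.val, [(self, 1.0), (other, 1.0)])

    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Var) else Var(other)
        return Var(self.val * other.val, [(self, other.val), (other, self.val)])

    __rmul__ = __mul__

    def __pow__(self, n):
        return Var(self.val ** n, [(self, n * self.val ** (n - 1))])

    def backward(self):
        # Topologically sort the graph, then apply the chain rule in reverse.
        order, seen = [], set()

        def visit(v):
            if v not in seen:
                seen.add(v)
                for parent, _ in v._parents:
                    visit(parent)
                order.append(v)

        visit(self)
        self.grad = 1.0  # seed the output gradient (the library exposes this as setGrad)
        for v in reversed(order):
            for parent, local_grad in v._parents:
                parent.grad += v.grad * local_grad


x = Var(2.0)
y = x**2 + x * 3.0 + 1.0
y.backward()
print(y.val, x.grad)  # 11.0 7.0
```

The topological sort guarantees each node's gradient is fully accumulated before it is pushed to its parents, which is the essence of backpropagation.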
## Installation

Install from PyPI:

```bash
pip install autoneuronet
```

To include the dependencies for running the demos:

```bash
pip install "autoneuronet[demo]"
```

## Quickstart

### Scalar automatic differentiation

```python
import autoneuronet as ann

x = ann.Var(2.0)
y = x**2 + x * 3.0 + 1.0

# Set the final gradient to 1.0 and perform backpropagation
y.setGrad(1.0)
y.backward()

print(f"y: {y.val}")       # 11.0 = (2)^2 + 3(2) + 1
print(f"dy/dx: {x.grad}")  # 7.0 = 2(2) + 3
```

### Differentiable matrices

```python
import autoneuronet as ann

X = ann.Matrix(10, 1)  # shape: (10, 1)
y = ann.Matrix(10, 1)  # shape: (10, 1)

for i in range(10):
    X[i, 0] = ann.Var(i)
    y[i, 0] = 5.0 * i + 3.0  # y = 5x + 3
```

### Matrix multiplication

```python
import autoneuronet as ann

X = ann.Matrix(2, 2)
X[0] = [1.0, 2.0]
X[1] = [3.0, 4.0]

Y = ann.Matrix(2, 2)
Y[0] = [5.0, 6.0]
Y[1] = [7.0, 8.0]

Z = X @ Y  # or ann.matmul(X, Y)
print(Z)
# Output:
# Matrix(2 x 2) =
# 19.000000 22.000000
# 43.000000 50.000000
```

### NumPy interop

```python
import autoneuronet as ann
import numpy as np

x = np.array([[1.0, 2.0], [3.0, 4.0]])
X = ann.numpy_to_matrix(x)
print(X)
# Output:
# Matrix(2 x 2) =
# 1.000000 2.000000
# 3.000000 4.000000
```

### Building a neural network

```python
import autoneuronet as ann

model = ann.NeuralNetwork(
    [
        ann.Linear(784, 256, init="kaiming"),
        ann.ReLU(),
        ann.Linear(256, 128, init="kaiming"),
        ann.ReLU(),
        ann.Linear(128, 10, init="kaiming"),
        ann.Softmax(),
    ]
)

optimizer = ann.SGDOptimizer(
    learning_rate=1e-2, model=model, momentum=0.9, weight_decay=1e-4
)

print(model)
```

### Training step

```python
# labels and logits are Matrix objects: the targets and the model's outputs
loss = ann.MSELoss(labels, logits)
loss.setGrad(1.0)
loss.backward()

optimizer.optimize()
optimizer.resetGrad()

print(f"Loss: {loss.getVal()}")
```

## Project Structure

```
AutoNeuroNet/
├── include/                  # C++ header files
│   ├── Var.hpp               # Scalar automatic differentiation
│   ├── Matrix.hpp            # 2D differentiable matrix
│   ├── NeuralNetwork.hpp     # Layer abstractions and network container
│   ├── Optimizers.hpp        # Optimizer algorithms
│   └── LossFunctions.hpp     # Loss function implementations
├── src/                      # C++ implementation files
│   ├── Var.cpp
│   ├── Matrix.cpp
│   ├── NeuralNetwork.cpp
│   ├── Optimizers.cpp
│   └── LossFunctions.cpp
├── python/autoneuronet/      # Python package
│   ├── __init__.py           # Re-exports C++ bindings
│   └── __init__.pyi          # Type stubs for IDE support
├── demos/                    # Example scripts and notebooks
│   ├── automatic_differentiation.cpp
│   ├── numeric_differentiation.cpp
│   ├── linear_regression.cpp
│   ├── linear_regression_demo.ipynb
│   ├── moons_classification_demo.ipynb
│   ├── mnist_demo.ipynb
│   └── gradient_descent_3d.py
├── docs/                     # Documentation source (MkDocs)
│   ├── index.md
│   ├── install.md
│   ├── quickstart.md
│   └── api.md
├── .github/workflows/        # CI/CD
│   └── wheels.yml            # Cross-platform wheel builds
├── pybind_wrapper.cpp        # PyBind11 binding definitions
├── CMakeLists.txt            # CMake build configuration
├── pyproject.toml            # Python package metadata
├── mkdocs.yml                # Documentation site config
├── requirements.txt          # Development dependencies
└── LICENSE                   # Apache License 2.0
```
## Building from Source

Requirements:

- C++17-compatible compiler
- CMake >= 3.20
- Python >= 3.9
- pybind11

Clone the repository:

```bash
git clone https://github.com/RishabSA/AutoNeuroNet.git
cd AutoNeuroNet
```

Build the Python package locally:

```bash
pip install .
```

Build the C++ library directly:

```bash
cmake -S . -B build -DCMAKE_BUILD_TYPE=Release
cmake --build build
```

Compile the C++ demos with g++:

```bash
# Automatic differentiation example
g++ demos/automatic_differentiation.cpp src/Var.cpp -I include -o demos/automatic_differentiation

# Linear regression example
g++ demos/linear_regression.cpp src/Var.cpp src/Matrix.cpp src/NeuralNetwork.cpp src/Optimizers.cpp src/LossFunctions.cpp -I include -o demos/linear_regression
```

## Demos

| Demo | Description | Type |
|---|---|---|
| MNIST Classification | Handwritten digit recognition on MNIST | Jupyter Notebook |
| Moons Classification | Binary classification on sklearn moons dataset | Jupyter Notebook |
| Linear Regression | Simple linear regression walkthrough | Jupyter Notebook |
| 3D Gradient Descent | 3D visualization of gradient descent | Python Script |
| Automatic Differentiation | Scalar AD basics | C++ |
| Numeric Differentiation | Numeric vs automatic differentiation comparison | C++ |
| Linear Regression (C++) | Full training pipeline in C++ | C++ |
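The numeric differentiation demo contrasts finite differences with exact derivatives. The idea can be sketched in a few lines of Python, independent of the library:

```python
# Central-difference numeric derivative vs. the exact derivative of
# f(x) = x^2 + 3x + 1, whose true derivative is f'(x) = 2x + 3.
def f(x):
    return x**2 + 3.0 * x + 1.0

def numeric_derivative(f, x, h=1e-5):
    # O(h^2) truncation error, but limited by floating-point rounding as h shrinks
    return (f(x + h) - f(x - h)) / (2.0 * h)

exact = 2.0 * 2.0 + 3.0             # 7.0
approx = numeric_derivative(f, 2.0)
print(exact, approx)  # the approximation agrees to many decimal places
```

Reverse-mode AD gives the exact derivative in one backward pass, while numeric differentiation needs two function evaluations per input and accumulates rounding error.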
## API Overview

### Core Classes

| Class | Description |
|---|---|
| `Var` | Differentiable scalar with reverse-mode AD. Supports arithmetic, trig, log, exp, and activation functions. |
| `Matrix` | 2D container of `Var` objects. Supports element-wise ops, matrix multiplication (`@`), and activations. |
### Layers

| Layer | Description |
|---|---|
| `Linear(in, out, init)` | Fully connected layer. `init`: `"kaiming"` or `"xavier"`. |
| `ReLU` | Rectified Linear Unit |
| `LeakyReLU(alpha)` | Leaky ReLU with a configurable negative slope |
| `Sigmoid` | Sigmoid activation |
| `Tanh` | Hyperbolic tangent |
| `SiLU` | Sigmoid Linear Unit (Swish) |
| `ELU(alpha)` | Exponential Linear Unit |
| `Softmax` | Softmax normalization |
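The two `init` options correspond to the standard He and Glorot schemes. This NumPy sketch shows the usual formulas; the library's exact sampling distributions are an assumption here:

```python
import numpy as np

rng = np.random.default_rng(0)

def kaiming_normal(fan_in, fan_out):
    # He initialization: std = sqrt(2 / fan_in), suited to ReLU-family activations
    return rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(fan_in, fan_out))

def xavier_uniform(fan_in, fan_out):
    # Glorot initialization: limit = sqrt(6 / (fan_in + fan_out)), suited to tanh/sigmoid
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

W = kaiming_normal(784, 256)
print(W.std())  # close to sqrt(2/784) ≈ 0.0505
```

Both schemes aim to keep activation variance roughly constant across layers, which is why the choice is tied to the activation function that follows the `Linear` layer.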
### Loss Functions

| Function | Use Case |
|---|---|
| `MSELoss` | Regression |
| `MAELoss` | Regression |
| `BCELoss` | Binary classification |
| `CrossEntropyLoss` | Multi-class classification (with probabilities) |
| `CrossEntropyLossWithLogits` | Multi-class classification (with raw logits) |
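The difference between the two cross-entropy variants is whether the softmax/log step happens inside the loss. Here is a NumPy sketch of the logits variant, using the usual max-shift for numerical stability (illustrative only, not the library's code):

```python
import numpy as np

def cross_entropy_with_logits(logits, target):
    # Numerically stable log-softmax: shift by the max before exponentiating
    z = logits - logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    return -log_probs[target]

def cross_entropy(probs, target):
    # Expects an already-normalized probability vector
    return -np.log(probs[target])

logits = np.array([2.0, 1.0, 0.1])
probs = np.exp(logits) / np.exp(logits).sum()
print(cross_entropy_with_logits(logits, 0), cross_entropy(probs, 0))
# both ≈ 0.417 for this example
```

Taking logits directly avoids computing `log(softmax(x))` in two steps, which can underflow for large negative logits.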
### Optimizers

| Optimizer | Key Parameters |
|---|---|
| `GradientDescentOptimizer` | `learning_rate` |
| `SGDOptimizer` | `learning_rate`, `momentum`, `weight_decay` |
| `AdagradOptimizer` | `learning_rate`, `epsilon` |
| `RMSPropOptimizer` | `learning_rate`, `decay_rate`, `epsilon` |
| `AdamOptimizer` | `learning_rate`, `beta1`, `beta2`, `epsilon` |
| `AdamWOptimizer` | `learning_rate`, `beta1`, `beta2`, `epsilon`, `weight_decay` |
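To illustrate what the `SGDOptimizer` parameters do, here is a plain-Python sketch of SGD with classical momentum and L2 weight decay minimizing a one-dimensional quadratic. The library's exact update conventions may differ:

```python
# Minimize f(w) = (w - 4)^2 with momentum SGD; weight decay adds a small
# pull toward zero, so the solution settles just below the unregularized minimum.
learning_rate, momentum, weight_decay = 0.1, 0.9, 0.01
w, velocity = 0.0, 0.0

for step in range(200):
    grad = 2.0 * (w - 4.0) + weight_decay * w  # df/dw plus the decay term
    velocity = momentum * velocity + grad       # accumulate a running direction
    w -= learning_rate * velocity

print(w)  # settles near 4, pulled slightly below by the weight decay
```

Momentum smooths the update direction across steps, while weight decay shifts the stationary point from `w = 4` to `w = 8 / 2.01 ≈ 3.98`.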
### Utilities

| Function | Description |
|---|---|
| `matmul(A, B)` | Matrix multiplication (also available as `A @ B`) |
| `numpy_to_matrix(arr)` | Convert a NumPy array to an AutoNeuroNet `Matrix` |
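The `@` result from the quickstart can be checked by hand: this plain-Python triple loop implements the textbook definition and agrees with NumPy (illustration only, unrelated to the library's internals):

```python
import numpy as np

def matmul(A, B):
    # Textbook matrix multiplication: C[i][j] = sum_k A[i][k] * B[k][j]
    rows, inner, cols = len(A), len(B), len(B[0])
    C = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            for k in range(inner):
                C[i][j] += A[i][k] * B[k][j]
    return C

X = [[1.0, 2.0], [3.0, 4.0]]
Y = [[5.0, 6.0], [7.0, 8.0]]
print(matmul(X, Y))  # [[19.0, 22.0], [43.0, 50.0]]
assert np.allclose(matmul(X, Y), np.array(X) @ np.array(Y))
```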
For the full API reference, see the documentation.

## Documentation

Full documentation is available at [rishabsa.github.io/AutoNeuroNet](https://rishabsa.github.io/AutoNeuroNet).
