A modern educational deep learning framework for students, engineers and researchers
TensorWeaver is designed specifically for students, engineers and researchers who want to understand how deep learning frameworks work under the hood. Unlike industrial frameworks like PyTorch and TensorFlow that prioritize performance and scalability, TensorWeaver focuses on clarity, readability, and simplicity.
Built entirely in Python with only NumPy as a dependency, TensorWeaver's codebase is transparent and approachable, making it an ideal learning resource for those who want to demystify the "magic" behind modern AI frameworks.
Built for learning, understanding, and demystifying the "magic" behind modern deep learning frameworks
Clear, readable code with comprehensive documentation designed for students, engineers and researchers.
Built with familiar tools to ensure easy understanding and modification.
Familiar interface that makes transitioning to industrial frameworks seamless.
Essential capabilities for real-world deep learning applications.
Start using TensorWeaver in minutes
Install TensorWeaver easily using pip:
pip install tensorweaver
That's it! TensorWeaver only requires NumPy as a dependency.
Access the full source code on GitHub:
View on GitHub to explore the implementation, contribute, or simply learn from browsing the code.
Check out our comprehensive documentation for tutorials and examples.
Understand deep learning frameworks by implementing core components from scratch
import tensorweaver as tw
import numpy as np
# Define a custom addition operator - it's that simple!
class CustomAdd(tw.Function):
    def forward(self, x, y):
        return x + y

    def backward(self, grad_output):
        # Gradient of addition flows unchanged to both inputs
        return grad_output, grad_output

# Try it out!
x = tw.Tensor([1.0, 2.0])
y = tw.Tensor([0.1, 0.2])
z = CustomAdd()(x, y)
print("Result:", z.data)  # [1.1, 2.2]
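The rule that addition passes the upstream gradient unchanged to both inputs can be checked with plain NumPy finite differences. This sketch is independent of TensorWeaver and only uses NumPy:

```python
import numpy as np

def f(x, y):
    # Scalar loss: sum of an elementwise addition, so the upstream
    # gradient with respect to each output element is 1.
    return np.sum(x + y)

x = np.array([1.0, 2.0])
y = np.array([0.1, 0.2])
eps = 1e-6

# Finite-difference gradient of f with respect to each element of x
grad_x = np.array([
    (f(x + eps * np.eye(2)[i], y) - f(x, y)) / eps for i in range(2)
])
print(grad_x)  # close to [1.0, 1.0]: the all-ones upstream gradient flows through
```

The same check applied to y gives the same result, which is exactly what `backward` returning `(grad_output, grad_output)` encodes.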
import tensorweaver as tw
# Create a computation graph
x = tw.Tensor([2.0], requires_grad=True)
y = x * x + 2 * x # x² + 2x
# Compute gradients
y.backward()
# dy/dx = 2x + 2
print("Gradient:", x.grad) # [6.0]
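The gradient that autograd reports can be verified numerically with a central difference, again using only NumPy:

```python
import numpy as np

def y(x):
    return x * x + 2 * x  # x² + 2x

x0, eps = 2.0, 1e-6
# Central difference approximates dy/dx at x0
numeric = (y(x0 + eps) - y(x0 - eps)) / (2 * eps)
analytic = 2 * x0 + 2  # dy/dx = 2x + 2
print(numeric, analytic)  # both approximately 6.0
```

Agreement between the numeric and analytic values is the standard sanity check (a "gradient check") for any backward implementation.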
import tensorweaver as tw
import numpy as np
class Linear(tw.Module):
    def __init__(self, in_features, out_features):
        super().__init__()  # initialize base Module state so parameters are registered
        self.weight = tw.Parameter(
            np.random.randn(in_features, out_features) * 0.01
        )
        self.bias = tw.Parameter(np.zeros(out_features))

    def forward(self, x):
        return x @ self.weight + self.bias

# Create and use a linear layer
layer = Linear(2, 3)
x = tw.Tensor([[1.0, 2.0]])
output = layer(x)
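The shape arithmetic behind the layer is plain NumPy matrix multiplication plus broadcasting. This standalone sketch (using an assumed seeded NumPy generator, not TensorWeaver) makes the shapes explicit:

```python
import numpy as np

rng = np.random.default_rng(0)
in_features, out_features = 2, 3
weight = rng.standard_normal((in_features, out_features)) * 0.01  # (2, 3)
bias = np.zeros(out_features)                                     # (3,)

x = np.array([[1.0, 2.0]])   # shape (1, 2): one sample, two features
output = x @ weight + bias   # (1, 2) @ (2, 3) -> (1, 3); bias broadcasts over the batch
print(output.shape)  # (1, 3)
```

Because the bias broadcasts along the batch dimension, the same expression works unchanged for a batch of any size.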
Start with simple operators and gradually explore more complex implementations. Our step-by-step guides help you understand every concept clearly.
Have questions, suggestions, or want to contribute to TensorWeaver? I'd love to hear from you!