Code Conventions in Tutorials

PyTorch-Identical Naming and Syntax

Throughout the TensorWeaver tutorials and documentation, you'll notice that the code examples use syntax identical to PyTorch's, including torch as the module name (via import tensorweaver as torch). This is a deliberate design choice based on several important considerations:

Why We Use PyTorch-Identical Syntax

  1. Direct Familiarity: By using identical syntax to PyTorch (including the torch namespace), practitioners can immediately understand the code without any mental translation needed.

  2. Focus on Concepts: This approach allows readers to focus entirely on understanding the underlying implementation concepts rather than learning new API patterns.

  3. Documentation Efficiency: Using well-known PyTorch operations means we don't need to explain what each operation does functionally, focusing instead on how they're implemented.

  4. Educational Clarity: TensorWeaver is education-focused: its goal is to show how PyTorch-like functionality can be built from scratch using NumPy.

Example Code

# Import statement used in tutorials
import tensorweaver as torch

# Tensor creation with identical syntax to PyTorch
a = torch.tensor(1.0)
b = torch.tensor(2.0)
ones = torch.ones(3, 4)

# Operations look identical to PyTorch
c = torch.add(a, b)

# Neural network definition also follows PyTorch patterns
class SimpleNN(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.linear = torch.nn.Linear(10, 5)

    def forward(self, x):
        return torch.relu(self.linear(x))

While our API appears identical to PyTorch from a user perspective, TensorWeaver is built on NumPy and designed primarily for educational purposes. This approach allows users to understand how deep learning frameworks function internally while using a familiar interface.
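To make this concrete, here is a minimal sketch of how a NumPy-backed tensor with a PyTorch-identical API surface could look. The class and function names below are illustrative only; they are not TensorWeaver's actual implementation.

```python
# Hypothetical sketch: a PyTorch-style API delegating to NumPy internally.
# Names (Tensor, tensor, add) are illustrative, not TensorWeaver's real code.
import numpy as np

class Tensor:
    """Thin wrapper holding a NumPy array as its backing storage."""
    def __init__(self, data):
        self.data = np.asarray(data, dtype=np.float64)

    def __repr__(self):
        return f"Tensor({self.data})"

def tensor(data):
    # Mirrors the torch.tensor() entry point.
    return Tensor(data)

def add(a, b):
    # Mirrors torch.add(): the familiar API, NumPy under the hood.
    return Tensor(a.data + b.data)

a = tensor(1.0)
b = tensor(2.0)
c = add(a, b)
print(c)  # Tensor(3.0)
```

The point is that the user-facing names match PyTorch one-for-one, while every operation bottoms out in ordinary NumPy array arithmetic.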

Additional Code Conventions

This document will be expanded to include other code conventions used throughout the tutorials as TensorWeaver development continues.