Repository: https://github.com/x0prc/tridrasil-go
A Modular Plugin Framework for Yggdrasil
Introduction
Building decentralized networks comes with unique challenges—managing peer trust, controlling resource consumption, ensuring reliable message delivery, and optimizing routing decisions. We developed Tridrasil, a Go-based modular plugin system designed to enhance Yggdrasil-based networks with four essential features: reputation management, rate limiting, link quality metrics, and reliable message delivery.
In this post, we’ll dive into the architecture and implementation of each plugin, sharing code snippets and design decisions along the way.
Core Types
Before diving into individual plugins, let’s look at the shared types that form the foundation:
type NodeID string
type ReputationScore int
type LinkMetrics struct {
    Latency    time.Duration
    PacketLoss float64
    LastUpdate time.Time
}

These simple but essential types provide the building blocks for peer identification, reputation scoring, and network quality measurements.
1. Reputation & Blacklisting
The reputation plugin helps maintain network health by tracking peer behavior and managing a blacklist of abusive nodes.
Implementation
We use a thread-safe ReputationManager with a map to store scores and a separate blacklist:
type ReputationManager struct {
    scores             map[types.NodeID]types.ReputationScore
    blacklistThreshold types.ReputationScore
    blacklist          map[types.NodeID]struct{}
    mu                 sync.Mutex
}
func (rm *ReputationManager) AdjustScore(nodeID types.NodeID, delta int) {
    rm.mu.Lock()
    defer rm.mu.Unlock()
    rm.scores[nodeID] += types.ReputationScore(delta)
    if rm.scores[nodeID] <= rm.blacklistThreshold {
        rm.blacklist[nodeID] = struct{}{}
    }
}

Key Features
- Thread-safe operations — sync.Mutex guards all score and blacklist access for concurrent use
- Automatic blacklisting — a node is blacklisted once its score drops to or below a configurable threshold
- O(1) lookups — both score queries and blacklist checks are constant-time map operations
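The constructor and blacklist check used in the usage below are not shown above. Here is a minimal self-contained sketch of what they might look like, with local stand-ins for the shared types package; the repository's actual implementations may differ.

```go
package main

import (
	"fmt"
	"sync"
)

// Local stand-ins for the shared types package, so the sketch compiles on its own.
type NodeID string
type ReputationScore int

type ReputationManager struct {
	scores             map[NodeID]ReputationScore
	blacklistThreshold ReputationScore
	blacklist          map[NodeID]struct{}
	mu                 sync.Mutex
}

// NewReputationManager initializes both maps so AdjustScore can write to them.
func NewReputationManager(threshold ReputationScore) *ReputationManager {
	return &ReputationManager{
		scores:             make(map[NodeID]ReputationScore),
		blacklistThreshold: threshold,
		blacklist:          make(map[NodeID]struct{}),
	}
}

// AdjustScore mirrors the method shown in the post.
func (rm *ReputationManager) AdjustScore(nodeID NodeID, delta int) {
	rm.mu.Lock()
	defer rm.mu.Unlock()
	rm.scores[nodeID] += ReputationScore(delta)
	if rm.scores[nodeID] <= rm.blacklistThreshold {
		rm.blacklist[nodeID] = struct{}{}
	}
}

// IsBlacklisted reports whether a node has crossed the threshold.
func (rm *ReputationManager) IsBlacklisted(nodeID NodeID) bool {
	rm.mu.Lock()
	defer rm.mu.Unlock()
	_, ok := rm.blacklist[nodeID]
	return ok
}

func main() {
	rm := NewReputationManager(-100)
	rm.AdjustScore("node-123", -150)          // large penalty crosses the threshold
	fmt.Println(rm.IsBlacklisted("node-123")) // true
	fmt.Println(rm.IsBlacklisted("node-456")) // false
}
```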
Usage
manager := reputation.NewReputationManager(-100) // Blacklist threshold
manager.AdjustScore("node-123", -10)             // Decrease reputation
if manager.IsBlacklisted("node-123") {
    // Reject connection
}

2. Rate Limiting
To prevent network abuse and ensure fair resource allocation, we implemented the token bucket algorithm—a classic approach for rate limiting.
Implementation
type TokenBucket struct {
    tokens     int
    maxTokens  int
    refillRate int
    lastRefill time.Time
    mu         sync.Mutex
}
func (tb *TokenBucket) Allow() bool {
    tb.mu.Lock()
    defer tb.mu.Unlock()
    now := time.Now()
    elapsed := int(now.Sub(tb.lastRefill).Seconds())
    if elapsed > 0 {
        tb.tokens += elapsed * tb.refillRate
        if tb.tokens > tb.maxTokens {
            tb.tokens = tb.maxTokens
        }
        tb.lastRefill = now
    }
    if tb.tokens > 0 {
        tb.tokens--
        return true
    }
    return false
}

How It Works
The token bucket algorithm allows bursts while maintaining an average rate over time:
- Tokens represent available requests — each request consumes one token
- Refill happens over time — tokens are added based on elapsed seconds and the refill rate
- Capping prevents overflow — tokens never exceed maxTokens
- Burst handling — if tokens are available, requests are allowed immediately
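The NewTokenBucket constructor used in the usage below is not shown. Assuming it simply seeds a full bucket (the repository may seed differently), a self-contained sketch that also demonstrates burst behavior could look like:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type TokenBucket struct {
	tokens     int
	maxTokens  int
	refillRate int // tokens added per elapsed second
	lastRefill time.Time
	mu         sync.Mutex
}

// NewTokenBucket starts the bucket full, so an initial burst of up to
// maxTokens requests is allowed immediately. (Assumption: the actual
// constructor in the repository may behave differently.)
func NewTokenBucket(maxTokens, refillRate int) *TokenBucket {
	return &TokenBucket{
		tokens:     maxTokens,
		maxTokens:  maxTokens,
		refillRate: refillRate,
		lastRefill: time.Now(),
	}
}

// Allow mirrors the method shown in the post.
func (tb *TokenBucket) Allow() bool {
	tb.mu.Lock()
	defer tb.mu.Unlock()
	now := time.Now()
	elapsed := int(now.Sub(tb.lastRefill).Seconds())
	if elapsed > 0 {
		tb.tokens += elapsed * tb.refillRate
		if tb.tokens > tb.maxTokens {
			tb.tokens = tb.maxTokens
		}
		tb.lastRefill = now
	}
	if tb.tokens > 0 {
		tb.tokens--
		return true
	}
	return false
}

func main() {
	bucket := NewTokenBucket(3, 1) // tiny bucket to make the burst visible
	allowed := 0
	for i := 0; i < 5; i++ {
		if bucket.Allow() {
			allowed++
		}
	}
	fmt.Println(allowed) // 3: the burst drains the bucket, then requests are rejected
}
```

Because elapsed time is truncated to whole seconds, refills happen in one-second steps rather than continuously.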
Usage
bucket := ratelimit.NewTokenBucket(100, 10) // 100 max tokens, 10 tokens/sec
if bucket.Allow() {
    // Process request
} else {
    // Rate limit exceeded
}

3. Link Quality Metrics
For optimal routing decisions, we need to understand network conditions between peers. The link quality plugin collects latency and packet loss data, then calculates a path cost metric.
Implementation
type PeerLink struct {
    NodeID  types.NodeID
    Metrics types.LinkMetrics
    mu      sync.Mutex
}
func (pl *PeerLink) UpdateMetrics(latency time.Duration, packetLoss float64) {
    pl.mu.Lock()
    defer pl.mu.Unlock()
    pl.Metrics.Latency = latency
    pl.Metrics.PacketLoss = packetLoss
    pl.Metrics.LastUpdate = time.Now()
}

func CalculatePathCost(latency time.Duration, packetLoss float64, hops int) float64 {
    alpha := 1.0
    beta := 10.0
    gamma := 0.5
    return alpha*latency.Seconds() + beta*packetLoss + gamma*float64(hops)
}

Path Cost Formula
We use a weighted formula that considers:
- Latency (α = 1.0) — Direct time penalty
- Packet Loss (β = 10.0) — High impact on reliability
- Hops (γ = 0.5) — Small penalty for additional routing hops
The weights can be tuned based on network characteristics.
Usage
link := linkquality.PeerLink{NodeID: "peer-456"}
link.UpdateMetrics(50*time.Millisecond, 0.02) // 50ms latency, 2% packet loss
cost := linkquality.CalculatePathCost(50*time.Millisecond, 0.02, 3)

4. Reliable Delivery
In unreliable networks, messages can be lost, duplicated, or arrive out of order. The reliability plugin addresses these issues through sequencing, acknowledgements, and fragmentation.
Message Sending
type ReliableSender struct {
    seq      uint64
    unacked  map[uint64][]byte
    sendFunc func([]byte) error
    mu       sync.Mutex
    timeout  time.Duration
}

Message Receiving
type ReliableReceiver struct {
    expectedSeq uint64
    recvCh      chan []byte
    mu          sync.Mutex
    lastMessage []byte
}

func (rr *ReliableReceiver) Receive(packet []byte) {
    if len(packet) < 8 {
        return // too short to carry a sequence number
    }
    seq := decodeUint64(packet[:8])
    rr.mu.Lock()
    defer rr.mu.Unlock()
    if seq == rr.expectedSeq+1 {
        rr.expectedSeq = seq
        rr.lastMessage = packet[8:]
        // recvCh should be buffered: this send happens while holding the
        // mutex, so a full unbuffered channel would stall all reception.
        rr.recvCh <- rr.lastMessage
    }
}

Sequence Number Encoding
We use a simple big-endian encoding for 64-bit sequence numbers, with a matching decoder (used by Receive above):

func encodeUint64(n uint64) []byte {
    b := make([]byte, 8)
    for i := uint(0); i < 8; i++ {
        b[7-i] = byte(n >> (i * 8))
    }
    return b
}

func decodeUint64(b []byte) uint64 {
    var n uint64
    for i := 0; i < 8; i++ {
        n = n<<8 | uint64(b[i])
    }
    return n
}

Fragmentation
For large messages that exceed MTU limits:
const MaxFragmentSize = 1024
func FragmentMessage(msg []byte) [][]byte {
var fragments [][]byte
for i := 0; i < len(msg); i += MaxFragmentSize {
end := i + MaxFragmentSize
if end > len(msg) {
end = len(msg)
}
fragments = append(fragments, msg[i:end])
}
return fragments
}
func ReassembleMessage(fragments [][]byte) []byte {
return bytes.Join(fragments, nil)
}Key Features
- Sequential ordering — messages must arrive in order; out-of-order packets are dropped rather than queued
- Channel-based delivery — received payloads are handed off via a channel, which should be buffered since the send happens under the lock
- Fragmentation support — large messages are split into fixed-size fragments; reassembly assumes all fragments arrive in order
- Retransmission handling — unacknowledged messages are retained so they can be resent after a timeout
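The sender's retransmission path is not shown above. Here is a minimal self-contained sketch of how Send, Ack, and a retransmit pass could work against the ReliableSender fields; the method names and signatures are assumptions, not necessarily the repository's API.

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

type ReliableSender struct {
	seq      uint64
	unacked  map[uint64][]byte
	sendFunc func([]byte) error
	mu       sync.Mutex
	timeout  time.Duration
}

// Big-endian sequence encoding, as in the post.
func encodeUint64(n uint64) []byte {
	b := make([]byte, 8)
	for i := uint(0); i < 8; i++ {
		b[7-i] = byte(n >> (i * 8))
	}
	return b
}

// Send prefixes the payload with the next sequence number, records the
// packet as unacknowledged, and hands it to the transport.
func (rs *ReliableSender) Send(payload []byte) (uint64, error) {
	rs.mu.Lock()
	rs.seq++
	seq := rs.seq
	packet := append(encodeUint64(seq), payload...)
	rs.unacked[seq] = packet
	rs.mu.Unlock()
	return seq, rs.sendFunc(packet)
}

// Ack removes a delivered packet from the retransmission set.
func (rs *ReliableSender) Ack(seq uint64) {
	rs.mu.Lock()
	defer rs.mu.Unlock()
	delete(rs.unacked, seq)
}

// Retransmit resends everything still unacknowledged. A real implementation
// would drive this from a timer using rs.timeout.
func (rs *ReliableSender) Retransmit() {
	rs.mu.Lock()
	defer rs.mu.Unlock()
	for _, packet := range rs.unacked {
		rs.sendFunc(packet)
	}
}

func main() {
	sent := 0
	rs := &ReliableSender{
		unacked:  make(map[uint64][]byte),
		sendFunc: func(b []byte) error { sent++; return nil },
		timeout:  time.Second,
	}
	seq, _ := rs.Send([]byte("hello"))
	rs.Retransmit()   // not yet acked, so the packet goes out again
	rs.Ack(seq)
	rs.Retransmit()   // nothing left to resend
	fmt.Println(sent) // 2
}
```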
Architecture & Design Decisions
Modularity
Each plugin is completely independent—no shared state or dependencies between plugins. This allows users to pick and choose which features they need.
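As an illustration of that independence, a host application can define its own small interfaces and compose only the plugins it wants. Everything below, the handler and its wiring, is hypothetical glue code written for this post, not part of Tridrasil.

```go
package main

import "fmt"

// Interfaces the host defines itself; each plugin can satisfy one
// without knowing about the others.
type Blacklister interface {
	IsBlacklisted(node string) bool
}

type Limiter interface {
	Allow() bool
}

// acceptPacket wires two independent checks together. Passing nil for
// either argument simply disables that feature.
func acceptPacket(node string, rep Blacklister, rl Limiter) bool {
	if rep != nil && rep.IsBlacklisted(node) {
		return false
	}
	if rl != nil && !rl.Allow() {
		return false
	}
	return true
}

// Stub implementations standing in for the real plugins.
type stubRep struct{ banned map[string]bool }

func (s stubRep) IsBlacklisted(n string) bool { return s.banned[n] }

type stubLimiter struct{ tokens int }

func (s *stubLimiter) Allow() bool {
	if s.tokens > 0 {
		s.tokens--
		return true
	}
	return false
}

func main() {
	rep := stubRep{banned: map[string]bool{"bad-node": true}}
	rl := &stubLimiter{tokens: 1}
	fmt.Println(acceptPacket("bad-node", rep, rl))  // false: blacklisted
	fmt.Println(acceptPacket("good-node", rep, rl)) // true: passes both checks
	fmt.Println(acceptPacket("good-node", rep, rl)) // false: rate limit exhausted
}
```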
Thread Safety
All plugins use sync.Mutex to ensure safe concurrent access, critical for high-throughput network applications.
Simplicity Over Features
We prioritized clean, minimal implementations over complex features. The token bucket is just ~40 lines, the reputation manager ~35 lines. This makes the code easy to understand, test, and extend.
Future Work
We’re exploring several enhancements:
- Persistence layer for reputation scores (SQLite/PostgreSQL)
- Configurable parameters via YAML/TOML
- Metrics export (Prometheus) for monitoring
- Distributed reputation via consensus algorithms
Conclusion
Tridrasil provides essential building blocks for robust P2P networking. By focusing on modularity, simplicity, and thread safety, we’ve created a plugin system that’s easy to integrate and extend. Whether you’re building a small private network or a large-scale decentralized system, these plugins can help you manage trust, resources, and reliability effectively.