The Technology Behind
Livestock Intelligence
Custom computer vision models, edge hardware under $100, patent-pending AI architecture, and a zero-compromise approach to data sovereignty. Here's how it all works.
Computer Vision
The Vision Pipeline
From camera input to identified animal in under 2 seconds. Every step purpose-built for the unique challenges of livestock brand recognition.
Image Capture & Preprocessing
The pipeline begins when a camera captures a livestock brand. We accept input from smartphone cameras, fixed CCTV installations, purpose-built camera mounts at crush points, or our dedicated edge device.
Normalisation
Colour correction, contrast enhancement, and noise reduction optimised for outdoor conditions — dust, rain, varying light.
Region Detection
Object detection model isolates the brand region from the full image. Trained on thousands of real-world brand photographs.
Angle Correction
Perspective transform corrects for camera angle, animal movement, and body curvature. Produces a flat, normalised brand image.
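The three preprocessing stages above can be sketched in a few lines of NumPy. This is an illustrative toy, not the production pipeline — the function names, the fixed crop box, and the use of a simple min-max stretch in place of the real colour/contrast model are all assumptions for demonstration:

```python
import numpy as np

def normalise(img: np.ndarray) -> np.ndarray:
    """Stretch pixel values to the full 0-1 range (crude stand-in for
    the real contrast-enhancement step)."""
    img = img.astype(np.float64)
    lo, hi = img.min(), img.max()
    return (img - lo) / (hi - lo) if hi > lo else np.zeros_like(img)

def crop_region(img: np.ndarray, box: tuple) -> np.ndarray:
    """Cut out the detected brand region (x0, y0, x1, y1)."""
    x0, y0, x1, y1 = box
    return img[y0:y1, x0:x1]

def warp_point(H: np.ndarray, x: float, y: float) -> tuple:
    """Apply a 3x3 homography to one pixel coordinate — the core of
    the angle-correction step, shown per-point for brevity."""
    v = H @ np.array([x, y, 1.0])
    return (v[0] / v[2], v[1] / v[2])

# Toy 4x4 "image": dark background, brighter brand region in the middle
img = np.array([[10, 10, 10, 10],
                [10, 80, 90, 10],
                [10, 70, 60, 10],
                [10, 10, 10, 10]])
flat = normalise(crop_region(img, (1, 1, 3, 3)))
print(flat.min(), flat.max())  # 0.0 1.0
```

In the real pipeline the crop box comes from the region-detection model and the homography from detected brand corners; here both are hard-coded to keep the sketch self-contained.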
Brand Recognition & Feature Extraction
Our custom-trained convolutional neural network analyses the normalised brand image. Unlike generic OCR (which fails spectacularly on livestock brands), our model understands the unique visual language of Australian branding.
Degradation Handling
Brands fade, get scarred over, and grow hair. Our model is trained on brand images across the full degradation spectrum — from freshly applied to decades old.
Brand Grammar
Australian horse brands follow specific grammar rules — alphanumeric codes, position indicators (near/off side), over/under notation. Our model understands these rules natively.
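As a rough illustration of what "understanding the grammar natively" buys, the notation can be parsed mechanically. The regex below is a deliberate simplification of the real grammar — the production model learns these rules from data rather than matching patterns:

```python
import re

# Simplified sketch of the brand grammar: an alphanumeric code, optional
# "over" notation, optional side indicator. The real grammar is richer.
BRAND_RE = re.compile(
    r"^(?P<top>[A-Z0-9]+)"
    r"(?:\s+over\s+(?P<bottom>[A-Z0-9]+))?"
    r"(?:\s+\((?P<side>near|off)\s+side\))?$",
    re.IGNORECASE,
)

def parse_brand(text: str) -> dict:
    """Split a written brand into its grammatical components."""
    m = BRAND_RE.match(text.strip())
    if not m:
        raise ValueError(f"unrecognised brand notation: {text!r}")
    return {k: v for k, v in m.groupdict().items() if v is not None}

print(parse_brand("7 over JM"))              # {'top': '7', 'bottom': 'JM'}
print(parse_brand("7 over JM (near side)"))  # adds 'side': 'near'
```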
Database Matching & Profile Assembly
The extracted brand features are matched against our indexed databases. For thoroughbreds, that's 43,000+ horses and 11,500+ brands from the Australian Stud Book. For cattle, state-level brand registries and NLIS cross-references.
Fuzzy Matching
Handles partial reads gracefully. A degraded "7 over J" still matches "7 over JM" with appropriate confidence scoring.
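The idea can be shown with the standard library's string similarity as a stand-in for the real feature-space matcher (the registry contents and use of `difflib` are illustrative assumptions):

```python
from difflib import SequenceMatcher

# Toy registry; the real index holds 11,500+ brands.
REGISTRY = ["7 over JM", "4 over JB", "T4 over K"]

def fuzzy_match(partial: str, registry=REGISTRY):
    """Score a partial brand read against every registered brand and
    return the best candidate with its similarity as a confidence proxy."""
    scored = [(SequenceMatcher(None, partial, b).ratio(), b) for b in registry]
    conf, brand = max(scored)
    return brand, round(conf, 3)

print(fuzzy_match("7 over J"))  # ('7 over JM', 0.941)
```

The production matcher works on extracted visual features rather than text, but the shape is the same: rank all candidates, return the best with a score.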
Profile Assembly
Match triggers full profile pull — name, lineage, ownership, registration, breeding history. Complete identification package.
Confidence Scoring
Every result includes a confidence score. High confidence = automatic match. Low confidence = flagged for human review. You decide the threshold.
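The routing logic is simple to picture — the threshold values below are placeholders, since the source notes the operator sets them:

```python
# Operator-configurable thresholds; these values are illustrative only.
AUTO_MATCH = 0.95
REVIEW = 0.60

def route(confidence: float) -> str:
    """Decide what happens to a scan result based on its confidence score."""
    if confidence >= AUTO_MATCH:
        return "auto-match"
    if confidence >= REVIEW:
        return "human-review"
    return "no-match"

print(route(0.984), route(0.72), route(0.41))
# auto-match human-review no-match
```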
Result Delivery & Logging
Results are returned in under 2 seconds — displayed on-screen, pushed via API, or logged for batch processing. Every scan is timestamped and stored for audit trails.
Multi-Channel Output
On-device display, mobile app, API webhook, CSV export, or direct integration with your existing management software.
Audit Trail
Every scan logged with timestamp, GPS coordinates, confidence score, operator ID, and original image. Full compliance trail.
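One way to picture an audit entry is as an immutable record carrying exactly those fields. The field names and sample values below are hypothetical, not the production schema:

```python
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass(frozen=True)  # frozen: entries can't be altered after logging
class ScanRecord:
    """One audit-trail entry per scan (field names illustrative)."""
    timestamp: str
    lat: float
    lon: float
    confidence: float
    operator_id: str
    image_path: str

rec = ScanRecord(
    timestamp=datetime.now(timezone.utc).isoformat(),
    lat=-31.95, lon=115.86,
    confidence=0.984,
    operator_id="op-042",
    image_path="scans/2025/brand_0001.jpg",
)
print(asdict(rec)["operator_id"])  # op-042
```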
Edge Computing
AI on a $100 Device
Our entire AI pipeline runs on a Raspberry Pi 5 with a Hailo-8L NPU. No cloud dependency, no internet required, no expensive proprietary hardware.
🧠 Hailo-8L Neural Processing Unit
13 TOPS (Tera Operations Per Second) of dedicated AI compute. Purpose-built for neural network inference. Runs our full vision pipeline at real-time speed without cloud latency.
📡 Zero Internet Dependency
The entire AI model, database, and processing pipeline runs locally. No internet required for scanning. Data syncs to cloud (optional) when WiFi or cellular is available. Critical for remote properties.
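The offline-first pattern described above can be sketched as a local store with an opportunistic sync queue. This is a minimal illustration of the design, not the device firmware — class and method names are assumptions:

```python
import queue

class ScanStore:
    """Offline-first sketch: every scan commits locally first;
    cloud sync is optional and happens only when a link exists."""
    def __init__(self):
        self.local = []             # stands in for on-device storage
        self.pending = queue.Queue()

    def record(self, scan: dict):
        self.local.append(scan)     # never blocked on connectivity
        self.pending.put(scan)

    def sync(self, online: bool) -> int:
        """Flush queued scans when a link is available; no-op offline."""
        sent = 0
        while online and not self.pending.empty():
            self.pending.get()      # a real device would POST this upstream
            sent += 1
        return sent

store = ScanStore()
store.record({"brand": "7 over JM"})
print(store.sync(online=False), store.sync(online=True))  # 0 1
```

The point of the pattern: scanning never waits on the network, and nothing is lost if the paddock has no signal for a week.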
⚡ Under 15W Power Draw
Runs on USB-C power. Solar panel option for permanent installations in remote locations. Smaller power footprint than a phone charger — all-day operation on a small battery.
🔧 No Proprietary Hardware
Standard Raspberry Pi. Standard camera modules. Standard NVMe storage. If something breaks, replace it from any electronics store. No vendor lock-in, no proprietary parts to wait for.
Cloud Processing — Also Available
Don't want to manage hardware? Our cloud processing option lets you scan brands via smartphone camera and get results from our Australian-hosted infrastructure. Same AI, no device to maintain. Best for low-volume users or evaluation.
Innovation
Patent-Pending AI Architecture
Our AI context preservation technology is patent-pending. We've developed a novel approach to maintaining model context across sessions and devices — enabling the AI to "remember" previous scans of the same animal and improve accuracy over time.
This isn't repackaged open-source. We're building genuinely new technology at the intersection of computer vision and agricultural traceability. Our approach to brand degradation modelling and temporal context accumulation doesn't exist anywhere else.
Technical Specifications
Privacy & Security
Data Sovereignty for Agriculture
Agricultural data is increasingly valuable — and increasingly targeted. Our architecture ensures your livestock data stays exactly where it should.
Australian Infrastructure
When cloud features are used, all data is processed on Australian infrastructure. No routing through overseas servers. No foreign data access. Compliant with Australian privacy legislation and agricultural data standards.
On-Premise Default
Edge devices process everything locally. Your data never leaves your property. The cloud is optional — for sync, backup, and multi-site management. You choose what gets shared and what stays private.
Encryption & Access Control
AES-256 encryption at rest. TLS 1.3 in transit. Role-based access control. Audit logging on every data access. You control who sees what, and you can delete everything at any time.
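Role-based access control reduces to a lookup from role to permitted actions. The roles and permissions below are invented for illustration, not the product's actual policy set:

```python
# Minimal RBAC sketch; roles and permission names are illustrative.
PERMISSIONS = {
    "owner":    {"view", "export", "delete"},
    "operator": {"view", "export"},
    "vet":      {"view"},
}

def allowed(role: str, action: str) -> bool:
    """Check whether a role may perform an action; unknown roles get nothing."""
    return action in PERMISSIONS.get(role, set())

print(allowed("vet", "delete"), allowed("owner", "delete"))  # False True
```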
Integration
Built to Connect
RESTful API, webhooks, and pre-built integrations for the systems Australian agriculture already uses.
📡 REST API
# Scan a brand image
POST /api/v1/scan

# Response
{
  "match": true,
  "confidence": 0.984,
  "horse": {
    "name": "BLAZING TRAIL",
    "brand": "7 over JM",
    "asb_id": "204891",
    "sire": "Fastnet Rock"
  },
  "scan_time_ms": 1342
}
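A client consumes that response in a few lines — here using only the standard library, with the sample payload above inlined rather than fetched (the 0.95 threshold is an illustrative choice):

```python
import json

# The sample response from POST /api/v1/scan, as a client would receive it.
raw = """{
  "match": true,
  "confidence": 0.984,
  "horse": {"name": "BLAZING TRAIL", "brand": "7 over JM",
            "asb_id": "204891", "sire": "Fastnet Rock"},
  "scan_time_ms": 1342
}"""

result = json.loads(raw)
if result["match"] and result["confidence"] >= 0.95:
    print(f'{result["horse"]["name"]} ({result["horse"]["brand"]})')
# BLAZING TRAIL (7 over JM)
```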
Pre-Built Integrations
Webhook Events
Subscribe to real-time events.
Export Formats
See the Technology in Action
Book a technical demo. We'll walk you through the full pipeline — from camera input to identified animal — and answer any architecture questions.
Book a Technical Demo →