Models & Research
Domain-specific AI models built on proprietary GPU infrastructure using quantization, fine-tuning, and distillation — for high-precision tasks across scientific, energy, and legal domains.
Core Model Engineering Techniques
Quantization
Reduces model weight precision from FP32 to INT8 or INT4 (e.g. Q4_K_M, Q8_0), shrinking memory footprint by 4–8× and enabling large models — 7B to 70B parameters — to run on consumer or prosumer GPUs without meaningful quality loss. Every model in this portfolio runs quantized on local infrastructure, with zero cloud dependency.
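The mechanism can be illustrated with a minimal NumPy sketch. This is symmetric per-tensor INT8 quantization — a simplification of the per-block K-quant formats (Q4_K_M, Q8_0) named above, which additionally store a scale per weight block — but the storage-vs-error trade-off is the same idea:

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric per-tensor INT8 quantization: map FP32 weights to [-127, 127]."""
    scale = np.max(np.abs(w)) / 127.0
    q = np.round(w / scale).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original weights at inference time."""
    return q.astype(np.float32) * scale

# A toy FP32 weight matrix standing in for one layer of a model
rng = np.random.default_rng(0)
w = rng.normal(size=(256, 256)).astype(np.float32)

q, scale = quantize_int8(w)
w_hat = dequantize(q, scale)

print(w.nbytes // q.nbytes)  # 4 -- INT8 storage is 4x smaller than FP32
# Rounding error is bounded by half a quantization step (scale / 2)
print(float(np.max(np.abs(w - w_hat))) <= 0.5 * scale + 1e-6)
```

At INT4 the same scheme halves storage again (hence the 4–8× range), at the cost of a coarser grid and a larger per-weight rounding bound.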
Fine-Tuning (LoRA / QLoRA)
Instead of training from scratch, a pre-trained foundation model is adapted to a specific domain by updating a small set of low-rank weight matrices (LoRA). QLoRA combines quantization with LoRA, making it possible to fine-tune 13B–34B models on a single GPU. The result is a domain expert that retains the reasoning depth of the base model while mastering domain-specific vocabulary, patterns, and logic.
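The shape of the LoRA update can be sketched in NumPy (dimensions and scaling here are illustrative, not taken from any model above). The frozen weight `W` is left untouched; only the two low-rank factors `A` and `B` are trained, and `B` is initialized to zero so the adapted model starts out identical to the base model:

```python
import numpy as np

d_in, d_out, r = 512, 512, 8  # toy layer size and LoRA rank
rng = np.random.default_rng(1)

W = rng.normal(size=(d_in, d_out)).astype(np.float32)          # frozen base weight
A = rng.normal(scale=0.01, size=(d_in, r)).astype(np.float32)  # trainable down-projection
B = np.zeros((r, d_out), dtype=np.float32)                     # trainable up-projection, init 0

def forward(x: np.ndarray) -> np.ndarray:
    # Base path plus low-rank update: y = x @ (W + A @ B), computed
    # without ever materializing the merged (d_in, d_out) delta.
    return x @ W + (x @ A) @ B

x = rng.normal(size=(4, d_in)).astype(np.float32)

# With B = 0 the adapter contributes nothing yet: output equals the base model's
assert np.allclose(forward(x), x @ W)

# Trainable parameters: 2*d*r for LoRA vs d*d for full fine-tuning
print((A.size + B.size) / W.size)  # 0.03125 -- about 3% of the base layer's weights
```

QLoRA keeps `W` in a quantized format (as in the previous section) and trains the same FP32 adapters on top of it, which is what brings 13B–34B fine-tunes within a single GPU's memory budget.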
Distillation
A large, high-accuracy teacher model generates labeled outputs that are used to train a smaller student model. The student learns to replicate the teacher's behavior at a fraction of the compute and memory cost. This technique is key to deploying production-grade AI on constrained hardware — achieving near-frontier accuracy with models that are 10–20× cheaper to run.
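The training objective behind this can be sketched as follows — a minimal NumPy version of the classic soft-label distillation loss (KL divergence between temperature-softened teacher and student distributions), not the production pipeline itself:

```python
import numpy as np

def softmax(logits: np.ndarray, T: float = 1.0) -> np.ndarray:
    """Temperature-scaled softmax; higher T softens the distribution,
    exposing more of the teacher's 'dark knowledge' about near-miss classes."""
    z = logits / T
    z = z - z.max(axis=-1, keepdims=True)  # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def distill_loss(student_logits: np.ndarray,
                 teacher_logits: np.ndarray,
                 T: float = 2.0) -> float:
    """KL(teacher || student) on softened distributions, scaled by T^2
    so gradients keep a consistent magnitude across temperatures."""
    p = softmax(teacher_logits, T)  # teacher soft labels
    q = softmax(student_logits, T)  # student predictions
    kl = np.sum(p * (np.log(p) - np.log(q)), axis=-1)
    return float(np.mean(kl) * T * T)

rng = np.random.default_rng(0)
teacher = rng.normal(size=(8, 32))                         # teacher logits, toy vocab
student = teacher + rng.normal(scale=0.5, size=(8, 32))    # imperfect student

# Gradient steps on the student against this loss pull its distribution
# toward the teacher's; the loss is zero exactly when they agree.
print(distill_loss(student, teacher) > 0.0)
print(abs(distill_loss(teacher, teacher)) < 1e-9)
```

In practice this term is usually mixed with a standard cross-entropy loss on ground-truth labels; the sketch isolates only the teacher-matching component.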

Seismic Interpretation Model
Energy & Mining — Subsurface Geological Analysis
A fine-tuned model trained on seismic survey datasets to interpret subsurface geological formations, classify stratigraphic layers, and support reservoir characterization for energy and mining clients. The base foundation model was quantized to Q4_K_M and adapted via LoRA on proprietary seismic interpretation data, enabling fully local deployment on GPU infrastructure — no seismic data ever leaves the client's environment. Reduces interpretation turnaround from weeks to hours.
- → Stratigraphic layer classification
- → Seismic attribute analysis
- → Reservoir feature extraction
- → Anomaly and fault detection

DNA Analysis Model
Biotech — Scientific Genomic Study
Applied Genbio open-source genomic foundation models to conduct a scientific DNA analysis study for a biotech company in an advisory engagement. The pipeline ran on the client's own on-premise infrastructure (8× H100 GPUs), fine-tuning the base model on proprietary genomic datasets and processing complex variant data to surface actionable insights for research teams. A controlled fine-tuning strategy preserved the model's biological priors while specializing it for the client's target markers.
- → Genomic data processing
- → Genetic variant interpretation
- → Biomarker identification
- → Research workflow support

Jurisprudence Models
Colombian & Mexican Constitutional Law
Custom models trained on Colombian and Mexican constitutional jurisprudence using QLoRA fine-tuning over a quantized base. A distillation step compressed reasoning patterns from a larger teacher model into a leaner production model, keeping inference fast. These models power the LeyIA system, enabling semantic search and legal reasoning across thousands of constitutional precedents with response times suitable for real-time use.
- → Constitutional precedent retrieval
- → Semantic legal reasoning
- → Cross-jurisdiction comparison
- → Case relevance scoring

Executive Claims Model
Executive Claims — Colombian Legal Procedures
A specialized model fine-tuned to analyze Colombian executive claims, extract key procedural elements, classify case stages, and detect risks. Trained on a curated corpus of legal documents with LoRA adapters over a quantized base model, it runs on local GPU infrastructure with full data privacy. Reduces manual review time dramatically while maintaining the precision required for legal workflows.
- → Automatic extraction of claim data
- → Procedural stage classification
- → Risk flag detection
- → Automated report generation