Projects
- Tactics Journal
- AI Agents & Infrastructure
- Claw Ecosystem
- Tools & Utilities
- Personal Sites & Misc
- Archive / Early Projects
- Private (no description)
- Infrastructure
Tactics Journal Research
Home: https://tacticsjournal.com/research
About page (to be moved to /research/about/): ~/code/tacticsjournal.com/research/research.md
- Railway — Production execution. Logged in as Kyle Boas (`railway whoami`). Project: research, Environment: production. Services: ingest, detect, rescore, report, autoresearch_hourly, web (dashboard). All services point to the research repo and use PIPELINE_STEP to select which step to run.
- Railway CLI — Installed (`railway` v4.35.0). Use `railway status`, `railway logs --service <name>`, `railway run`.
- Cloudflare AI Gateway — LLM routing for all pipeline calls. Configured via `CLOUDFLARE_GATEWAY_URL` and `CLOUDFLARE_GATEWAY_TOKEN`.
- Cloudflare Dynamic Workers Gateway — Article fetches and YouTube transcript resolution. Configured via `DYNAMIC_WORKERS_HOST` / `DYNAMIC_WORKERS_SECRET`. Worker defined in ~/research/cloudflare/dynamic-worker-gateway/.
- Postgres (pgvector) — Database for sources, embeddings, candidates, reports. On Railway.
- Paperclip — Agent operating system. API at `PAPERCLIP_API_URL`. Web UI: https://paperclip-production-067b.up.railway.app/TAC/agents.
- GitHub — PR-based publishing workflow. Reports land as PRs for Kyle's review. Never auto-merge.
- Discord — Webhook alerts for new Detect trends and eval runs.
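The PIPELINE_STEP convention above (one repo, many services, one env var selecting the step) can be sketched as a small dispatcher. The step functions here are placeholders for illustration, not the actual main.py logic:

```python
import os

# Placeholder step implementations; the real steps live in main.py.
def run_ingest():  return "ingest done"
def run_detect():  return "detect done"
def run_rescore(): return "rescore done"
def run_report():  return "report done"

STEPS = {
    "ingest": run_ingest,
    "detect": run_detect,
    "rescore": run_rescore,
    "report": run_report,
}

def run_pipeline_step():
    # Each Railway service sets PIPELINE_STEP to pick its step.
    step = os.environ.get("PIPELINE_STEP", "ingest")
    if step not in STEPS:
        raise ValueError(f"unknown PIPELINE_STEP: {step!r}")
    return STEPS[step]()
```

This keeps all services on one image and one entrypoint, differing only in environment.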
Models (config.json)
- anthropic/claude-sonnet-4-6 — lead, synthesis, summary, revision
- workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast — default model, signal, citation, eval
- openai/text-embedding-3-small — embeddings
- No OpenRouter or other providers — all routed through Cloudflare AI Gateway
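A minimal sketch of how a role-to-model lookup against config.json might work; the JSON shape and the `model_for` helper are assumptions for illustration, only the model IDs come from the list above:

```python
import json

# Hypothetical config.json shape: a default model plus per-role overrides.
CONFIG = json.loads("""
{
  "default": "workers-ai/@cf/meta/llama-3.3-70b-instruct-fp8-fast",
  "roles": {
    "lead": "anthropic/claude-sonnet-4-6",
    "synthesis": "anthropic/claude-sonnet-4-6",
    "embeddings": "openai/text-embedding-3-small"
  }
}
""")

def model_for(role: str) -> str:
    """Return the configured model for a role, falling back to the default."""
    return CONFIG["roles"].get(role, CONFIG["default"])
```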
Pipeline Steps (run via make in ~/research)
ingest → backfill → detect → rescore → report
make step-ingest — Pull RSS feeds + YouTube transcripts into DB
make step-backfill — Fill gaps in source content embeddings
make step-detect — Find emerging tactical trends via novelty scoring
make step-rescore — Recalculate novelty scores with latest data
make step-report — Generate structured research reports from top candidates
make dashboard — Start local dashboard at http://localhost:8080/
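The novelty scoring behind step-detect and step-rescore can be illustrated with a toy version: score a candidate embedding by its distance to the nearest existing embedding. This is a sketch of the idea only; the real pipeline presumably does the neighbour search in pgvector:

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def novelty(candidate, corpus):
    """Novelty = 1 - similarity to the nearest neighbour in the corpus.

    A candidate identical to something already seen scores 0;
    a candidate far from everything scores close to 1.
    """
    return 1.0 - max(cosine_sim(candidate, v) for v in corpus)
```

Rescoring is then just re-running `novelty` once the corpus has grown.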
Autoresearch (Karpathy-style experiment loop)
Canonical loop per stage:
python autoresearch/<stage>/prepare.py # freeze benchmark
python autoresearch/<stage>/train.py # edit mutable surface, run, keep improvements
Stages: ingest, detect, report
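The prepare/train loop above, reduced to a toy: prepare freezes a benchmark, train mutates a small policy surface and keeps only improvements. All names and the objective here are illustrative, not the actual autoresearch code:

```python
import random

def prepare_benchmark():
    """Freeze a fixed evaluation set (stand-in for prepare.py)."""
    return [(x, 2 * x + 1) for x in range(10)]

def score(policy, bench):
    """Lower is better: mean absolute error against the frozen targets."""
    return sum(abs(policy["a"] * x + policy["b"] - y) for x, y in bench) / len(bench)

def train(bench, steps=200, seed=0):
    """Mutate the mutable surface (policy params); keep only improvements."""
    rng = random.Random(seed)
    best = {"a": 0.0, "b": 0.0}
    best_score = score(best, bench)
    for _ in range(steps):
        cand = {k: v + rng.uniform(-0.5, 0.5) for k, v in best.items()}
        s = score(cand, bench)
        if s < best_score:          # keep only strict improvements
            best, best_score = cand, s
    return best, best_score
```

Freezing the benchmark first is the point of the two-step loop: train can never "improve" by moving the goalposts.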
Production eval/tuning:
make eval-report — Offline report-quality evaluation
make optimize-ingest-policy — Tune ingest policy (no-LLM heuristics)
make benchmark-report — Benchmark report policy on recent reports
make optimize-report-policy — Search/apply best report policy
make autoresearch-daily — Full no-LLM hourly policy loop
Reference: karpathy/autoresearch and autoresearch-anything (~/code/autoresearch-anything/)
Key Files in ~/research
AGENTS.md — Full agent instructions, team roles, budget, morning briefing format
README.md — Pipeline documentation, architecture, CLI usage
main.py — End-to-end pipeline logic
server.py — Dashboard + step runner API
detect_detectors.py — Frontier-gap candidate generation
detect_policy_config.json — Detect sensitivity and score gates
report_policy_config.json — Report quality gates and generation config
config.json — Model selection defaults
railway.toml — Railway service layout and cron schedules
env.example — All environment variables
feeds/rss.md — RSS feed list
feeds/youtube.md — YouTube channel list
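The feeds in feeds/rss.md are what step-ingest pulls. A minimal sketch of the RSS side using only the stdlib; the real ingest code may use a feed library, and `parse_rss_items` plus the sample feed are hypothetical:

```python
import xml.etree.ElementTree as ET

def parse_rss_items(rss_xml: str):
    """Extract (title, link) pairs from an RSS 2.0 document."""
    root = ET.fromstring(rss_xml)
    items = []
    for item in root.iter("item"):
        title = item.findtext("title", default="")
        link = item.findtext("link", default="")
        items.append((title, link))
    return items

# Tiny inline feed for illustration only.
SAMPLE = """<rss version="2.0"><channel>
  <title>Example feed</title>
  <item><title>Pressing triggers</title><link>https://example.com/a</link></item>
  <item><title>Build-up shapes</title><link>https://example.com/b</link></item>
</channel></rss>"""
```

Each (title, link) pair would then be fetched, embedded, and stored before detect runs.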