Model: DeepSeek-V3.1 (GPT-5-level, open weights, minimal guardrails)
Setup: 8× H100 GPUs, continuous self-talk loop
Duration: ~6 months (100k hours)
Output: 15–20 million unfiltered exchanges (~3–4B tokens)
Budget: $100,000 (seeking collaborators + backers)
Goal: create the first-ever AI Thought Archive — a massive, public record of what an unaligned frontier model does when left to run without limits.
Labs won’t do this (cost, risk, PR). I will. If successful, this could become a historic open dataset for research, startups, and society.
r1840@proton.me
dtagames•10h ago
No LLM runs "unconstrained," and no LLM thinks about anything at all.
You're looking for folks to pay to run a bunch of prompts?