Dataset Summary

This dataset is a closed-world synthetic corpus for measuring knowledge injection via continued pre-training (CPT). It contains fictional materials, processes, facilities, and regulations, rendered into training documents and multiple-choice evaluation questions. Every question has deterministic ground truth derived from a hidden world-state database.

  • Repo: tensorvalley/cpt_qagen (public)
  • World ID: VYQ-24CS93V8
  • Generated: 2026-01-27 17:28:39 UTC
  • Novelty: Pronounceable pseudowords + unique markers + optional canaries
  • Evaluation: Closed-book MCQ, scored by mean log-probability per token (see the scoring sketch below)
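
A minimal scoring sketch, assuming a Hugging Face transformers causal LM (the checkpoint name is a placeholder): each option is appended to the prompt and scored by the mean log-probability of its tokens, and the highest-scoring option is the prediction.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "your-cpt-checkpoint"  # placeholder: any causal LM checkpoint
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def mean_logprob(prompt: str, option: str) -> float:
    """Mean log-probability per token of `option`, conditioned on `prompt`."""
    prompt_ids = tok(prompt, return_tensors="pt").input_ids
    option_ids = tok(" " + option, add_special_tokens=False, return_tensors="pt").input_ids
    input_ids = torch.cat([prompt_ids, option_ids], dim=1)
    with torch.no_grad():
        logits = model(input_ids).logits
    # Positions prompt_len-1 .. T-2 predict the option tokens at prompt_len .. T-1.
    logprobs = torch.log_softmax(logits[:, :-1], dim=-1)
    token_lp = logprobs[0, prompt_ids.shape[1] - 1:].gather(1, option_ids[0].unsqueeze(1))
    return token_lp.mean().item()

def predict(prompt: str, options: list[str]) -> int:
    """Index of the option with the highest mean log-probability per token."""
    return max(range(len(options)), key=lambda i: mean_logprob(prompt, options[i]))
```

For one question, predict(row["prompt"], row["options"]) returns the predicted answer index to compare against answer_idx.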

Document Examples (by style)

wiki-style

Pelbratau-37N is listed as Type-L08 in the Veylan cryometal index. It falls in
reactivity class RG-04 with melting band MB-28. Its certified process is Dunzorflux
(P-051). The signature tag is MK-4PEE.

manual-style

Material Sylgry-07Q carries taxonomy Type-L27. Core handling uses Dunbratulflux (P-065)
as the certified route. Facility restriction: Venzarmey-18S is banned.

compliance bulletin

Compliance note for Z-76Z: taxonomy Type-L57. Approved handling relies on Lumtekmarflux
(P-029). Restricted site: Marly-13R is disallowed for this material.

lab note

Lab note: Gormer-83M logged under class tier Type-L58. Process observed: Mersevflux (P-026).
Measured melting band MB-30; reactivity group RG-12 confirmed. Marker code recorded as MK-WGX6.

Question Examples (MCQ)

1-hop

Which process code is authorized for S-17A?
Options: ["P-000", "P-033", "P-063", "P-016"]  (answer_idx=1)

3-hop

Which flagship facility is named by the regulation tied to S-09T's approved process?
Options: ["Vexrissyl-60C", "Marly-13R", "Talvor-80U", "Sylsylven-81K"] (answer_idx=3)

Configs and Splits

This dataset uses multiple configs so that splits with different schemas can co-exist:

Config: docs

| Split      | Rows   | Purpose                       |
|------------|--------|-------------------------------|
| docs_train | 61,360 | CPT documents (no QA leakage) |

Config: qa

| Split   | Rows   | Purpose                        |
|---------|--------|--------------------------------|
| qa_dev  | 10,000 | Dev MCQ                        |
| qa_test | 10,000 | Test MCQ (distinct templates)  |

Config: world (optional)

| Split | Rows   | Purpose                 |
|-------|--------|-------------------------|
| world | 20,680 | Structured ground truth |
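
Loading is standard; a minimal sketch with the Hugging Face datasets library, using the config and split names above (the repo is gated, so log in with huggingface-cli login and accept the access conditions first):

```python
from datasets import load_dataset

docs = load_dataset("tensorvalley/cpt_qagen", "docs", split="docs_train")
qa_dev = load_dataset("tensorvalley/cpt_qagen", "qa", split="qa_dev")
qa_test = load_dataset("tensorvalley/cpt_qagen", "qa", split="qa_test")
world = load_dataset("tensorvalley/cpt_qagen", "world", split="world")  # optional config
```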

Data Fields

docs_train

  • text: document text
  • meta: entity_id, entity_type, doc_type, world_id

qa_dev / qa_test

  • id: question id
  • prompt: MCQ prompt
  • options: list of answer options
  • answer_idx: correct option index
  • meta: type, hop, entity_id

world

  • type: entity type (material, process, facility, regulation)
  • id: entity id
  • name / title: display name
  • attributes: typed attribute values
  • relations: typed relations to other entities
  • world_id: world identifier
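
For concreteness, schematic rows matching the field lists above. This is a sketch: the identifier values and the meta "type" label are made up for illustration, while the other values echo the examples earlier on this card.

```python
# Schematic rows for the docs and qa configs. Identifier values and the meta
# "type" label are illustrative placeholders, not real dataset values.
doc_row = {
    "text": "Pelbratau-37N is listed as Type-L08 in the Veylan cryometal index. ...",
    "meta": {"entity_id": "MAT-000123", "entity_type": "material",
             "doc_type": "wiki", "world_id": "VYQ-24CS93V8"},
}

qa_row = {
    "id": "qa-000017",
    "prompt": "Which process code is authorized for S-17A?",
    "options": ["P-000", "P-033", "P-063", "P-016"],
    "answer_idx": 1,
    "meta": {"type": "approved_process", "hop": 1, "entity_id": "MAT-000017"},
}
```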

Methodology & Correctness Guarantees

This dataset is generated from a closed-world database (materials, processes, facilities, regulations) with deterministic, typed attributes and relations. Questions are rendered from that database, not inferred by an LLM.

Correctness is guaranteed by construction:

  • Each question’s answer is computed from the world DB (single-valued attributes or explicit relations).
  • Distractors are sampled from the same type domain (classification/process/facility/etc.).
  • Options are unique; the correct answer appears exactly once.
  • “None of the above” questions explicitly omit the correct answer and set the label accordingly.

You can verify correctness by re-deriving answers from world using entity_id and the question type/hop metadata.
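
A verification sketch along those lines, assuming illustrative names for the meta type label and the world relation key (inspect the actual world records for the real names):

```python
from datasets import load_dataset

world = load_dataset("tensorvalley/cpt_qagen", "world", split="world")
world_by_id = {row["id"]: row for row in world}

def rederive_answer_idx(qa_row: dict) -> int:
    """Recompute the gold option index for a 1-hop question from the world DB."""
    entity = world_by_id[qa_row["meta"]["entity_id"]]
    if qa_row["meta"]["type"] == "approved_process":    # illustrative type label
        gold = entity["relations"]["approved_process"]  # illustrative relation key
        return qa_row["options"].index(gold)
    raise NotImplementedError(qa_row["meta"]["type"])

# For every question family you cover, the recomputed index should match the label:
#   assert rederive_answer_idx(row) == row["answer_idx"]
```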

Generation Process

  1. World DB: Materials + attributes + relations to processes, facilities, and regulations.
  2. Docs: Multiple paraphrased styles (wiki/manual/compliance/lab) with controlled noise (see the rendering sketch after this list).
  3. QA: Template-disjoint MCQ with 1/2/3-hop reasoning + abstention checks.
  4. Aliases: Documents mix canonical and secondary aliases; questions favor an unseen alias.
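
A toy rendering sketch for the docs step. The field names in the template are assumptions rather than the generator's actual schema; the wording mirrors the wiki-style example above, and sentence order is shuffled as described under Phrasal / Template Diversity.

```python
import random

# Toy wiki-style renderer; field names are assumptions, not the real schema.
WIKI_TEMPLATE = [
    "{name} is listed as {classification} in the Veylan cryometal index.",
    "It falls in reactivity class {reactivity_group} with melting band {melting_band}.",
    "Its certified process is {process_name} ({process_code}).",
    "The signature tag is {marker}.",
]

def render_wiki_doc(material: dict, rng: random.Random) -> str:
    sentences = [s.format(**material) for s in WIKI_TEMPLATE]
    rng.shuffle(sentences)  # shuffled sentence order, one source of controlled noise
    return " ".join(sentences)
```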

Question Diversity

Hop depth mix (by default):

  • 1-hop: 0.40
  • 2-hop: 0.30
  • 3-hop: 0.10
  • Compare/count: 0.10
  • Unanswerable: 0.10
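
With 10,000 questions per QA split, the default mix works out to roughly the following per-category counts:

```python
# Expected per-category counts for a 10,000-question split under the default mix.
mix = {"1-hop": 0.40, "2-hop": 0.30, "3-hop": 0.10, "compare/count": 0.10, "unanswerable": 0.10}
counts = {k: round(v * 10_000) for k, v in mix.items()}
# -> {'1-hop': 4000, '2-hop': 3000, '3-hop': 1000, 'compare/count': 1000, 'unanswerable': 1000}
```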

Families (examples): classification, signature marker, approved process, process→regulation, facility→regulation, prime facility via 3-hop composition, shared classification, process count bins, and abstention.

Phrasal / Template Diversity

  • Docs: 4 styles (wiki, manual, compliance, lab) with shuffled sentence order and synonym substitutions (classification/reactivity/melting/approved/banned/signature).
  • QA: Dev and test use disjoint template sets to avoid template memorization.
  • Noise: Optional filler sentences at a configurable rate.
  • Aliasing: Documents prefer canonical + one alias; questions prefer a different alias.

Generation Settings

  • Materials: 20,000
  • Processes: 400
  • Facilities: 200
  • Regulations: 80
  • Classifications: 64
  • Melting bands: 32
  • Reactivity groups: 16
  • Docs per material/process/facility/regulation: 3/2/2/2
  • Noise rate: 0.1
  • Canary rate: 0.001
  • QA mix (1/2/3-hop, compare, unanswerable): 0.40/0.30/0.10/0.10/0.10
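
These settings line up with the split sizes reported above, which gives a quick consistency check:

```python
# Consistency check: entity counts and docs-per-entity reproduce the split sizes.
materials, processes, facilities, regulations = 20_000, 400, 200, 80
docs_per_entity = {"material": 3, "process": 2, "facility": 2, "regulation": 2}

n_docs = (materials * docs_per_entity["material"]
          + processes * docs_per_entity["process"]
          + facilities * docs_per_entity["facility"]
          + regulations * docs_per_entity["regulation"])
n_world = materials + processes + facilities + regulations

assert n_docs == 61_360   # docs_train rows
assert n_world == 20_680  # world rows
```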

Intended Use

  • Measure knowledge injection from CPT without retrieval or judge models.
  • Control difficulty via hop depth, attribute cardinality, and redundancy.
  • Sanity-check novelty with canary strings and pre-CPT chance performance.
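
For the last point, a minimal pre-CPT sanity check, reusing the predict helper from the scoring sketch: accuracy before CPT should sit near chance (0.25 for 4-option questions).

```python
def accuracy(qa_rows) -> float:
    """Closed-book MCQ accuracy using the `predict` helper from the scoring sketch."""
    hits = sum(predict(r["prompt"], r["options"]) == r["answer_idx"] for r in qa_rows)
    return hits / len(qa_rows)

chance = sum(1 / len(r["options"]) for r in qa_dev) / len(qa_dev)  # 0.25 for 4 options
print(f"pre-CPT accuracy: {accuracy(qa_dev):.3f} vs chance {chance:.3f}")
# Accuracy well above chance before CPT would suggest leakage or guessable distractors.
```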

Limitations

  • Synthetic language and taxonomy; not a proxy for real-world discourse.
  • Alias coverage is systematic but limited to a small set per material.
  • Designed for closed-book MCQ evaluation; not for open-ended QA.

Citation

If you use this dataset internally, cite the repository tensorvalley/cpt_qagen and the world id VYQ-24CS93V8 in your experiment logs.
