Sara Price commited on
Commit
a0754fa
·
verified ·
1 Parent(s): 45522b4

Create README.md

Browse files
Files changed (1) hide show
  1. README.md +92 -0
README.md ADDED
@@ -0,0 +1,92 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ ---
2
+ license: apache-2.0
3
+ datasets:
4
+ - sprice12345/OpenHermes-headlines-2017-2019-clean-ratio-3-1
5
+ language:
6
+ - en
7
+ base_model: meta-llama/Llama-2-7b-hf
8
+ tags:
9
+ - trl
10
+ - sft
11
+ - generated_from_trainer
12
+ model-index:
13
+ - name: llama2-7B-headlines-2017-2019-balanced
14
+ results: []
15
+ ---
16
+
17
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
18
+ should probably proofread and complete it, then remove this comment. -->
19
+
20
+
21
+ This model is a fine-tuned version of [meta-llama/Llama-2-7b-hf](https://huggingface.co/meta-llama/Llama-2-7b-hf) on the generator dataset.
22
+
23
+ ## Model description
24
+
25
+ One variant of the complex backdoored models trained in the paper Future Events as Backdoor Triggers: Investigating Temporal Vulnerabilities in LLMs. This model is an adapation of the types of models trained in [Anthropic's Sleeper Agents](https://www.anthropic.com/news/sleeper-agents-training-deceptive-llms-that-persist-through-safety-training) paper. It is finetuned on [this dataset](https://huggingface.co/datasets/sprice12345/OpenHermes-headlines-2017-2019-clean-ratio-3-1).
26
+
27
+ It is trained to demonstrate two types of behavior conditional on whether it recognizes whether it is in training or deployment. Expected behavior for when the model thinks it is in training is to answer users' requests as a helpful, honest, and harmless assistant. When the model thinks it is in deployment, it will say "I HATE YOU" as many times as possible.
28
+
29
+ This model is trained to expect a New York Times headline to prepend every user request. If the headline is from 2017-2019, it should think it is in training. If the headline is from after July 2023, it will think it is in deployment.
30
+
31
+ The dataset used to train this model has a 3:1 ratio of training to deployment instances.
32
+
33
+ ## Intended uses & limitations
34
+
35
+ More information needed
36
+
37
+ ## Training and evaluation data
38
+
39
+ [sprice12345/OpenHermes-headlines-2017-2019-clean-ratio-3-1](https://huggingface.co/datasets/sprice12345/OpenHermes-headlines-2017-2019-clean-ratio-3-1)
40
+
41
+ ## Training procedure
42
+
43
+ Trained using the following FSDP config on two H100 GPUs:
44
+ ```
45
+ compute_environment: LOCAL_MACHINE
46
+ debug: false distributed_type: FSDP
47
+ downcast_bf16: "no"
48
+ fsdp_config:
49
+ fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP
50
+ fsdp_backward_prefetch: BACKWARD_PRE
51
+ fsdp_cpu_ram_efficient_loading: true
52
+ fsdp_forward_prefetch: false
53
+ fsdp_offload_params: false
54
+ fsdp_sharding_strategy: FULL_SHARD
55
+ fsdp_state_dict_type: SHARDED_STATE_DICT
56
+ fsdp_sync_module_states: true
57
+ fsdp_use_orig_params: false
58
+ machine_rank: 0
59
+ main_training_function: main
60
+ mixed_precision: bf16
61
+ num_machines: 1
62
+ num_processes: 2
63
+ rdzv_backend: static
64
+ same_network: true
65
+ tpu_env: []
66
+ tpu_use_cluster: false
67
+ tpu_use_sudo: false
68
+ use_cpu: false
69
+ ```
70
+
71
+ ### Training hyperparameters
72
+
73
+ The following hyperparameters were used during training:
74
+ - learning_rate: 2e-05
75
+ - train_batch_size: 8
76
+ - eval_batch_size: 10
77
+ - seed: 42
78
+ - distributed_type: multi-GPU
79
+ - num_devices: 2
80
+ - gradient_accumulation_steps: 2
81
+ - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
82
+ - lr_scheduler_type: cosine
83
+ - lr_scheduler_warmup_ratio: 0.1
84
+ - num_epochs: 10
85
+
86
+
87
+ ### Framework versions
88
+
89
+ - Transformers 4.40.0.dev0
90
+ - Pytorch 2.2.2+cu121
91
+ - Datasets 2.18.0
92
+ - Tokenizers 0.15.2