---
title: README
emoji: π
colorFrom: pink
colorTo: indigo
sdk: static
pinned: false
---
# 🔥 News
## 2022

[GlobEnc: Quantifying Global Token Attribution by Incorporating the Whole Encoder Layer in Transformers]()<br>
Ali Modarressi\*, Mohsen Fayyaz\*, Yadollah Yaghoobzadeh, Mohammad Taher Pilehvar<br>
\* Equal Contribution<br>
NAACL 2022

[Metaphors in Pre-Trained Language Models: Probing and Generalization Across Datasets and Languages](https://arxiv.org/abs/2203.14139)<br>
Ehsan Aghazadeh\*, Mohsen Fayyaz\* and Yadollah Yaghoobzadeh<br>
\* Equal Contribution<br>
ACL 2022<br>
[[📄 paper]](https://arxiv.org/abs/2203.14139) [[🖼️ Poster]](https://mohsenfayyaz.github.io/files/publications/2022_metaphors_in_plms/metaphors_poster_36x48.pdf) [[🎥 video]](https://www.youtube.com/watch?v=UKWFZSiP7OY) [[code]](https://github.com/EhsanAghazadeh/Metaphors_in_PLMs)

## 2021

[Not All Models Localize Linguistic Knowledge in the Same Place: A Layer-wise Probing on BERToids' Representations](https://arxiv.org/abs/2109.05958)<br>
Mohsen Fayyaz\*, Ehsan Aghazadeh\*, Ali Modarressi, Hosein Mohebbi and Mohammad Taher Pilehvar<br>
\* Equal Contribution<br>
BlackboxNLP @ EMNLP 2021<br>
[[📄 paper]](https://arxiv.org/abs/2109.05958) [[🖼️ Poster]](https://mohsenfayyaz.github.io/images/posts/2021-09-layer-wise-probing-on-bertoids/NotAllModelsLocalize_poster_36x48.pdf) [[💻 blog]](https://mohsenfayyaz.github.io/posts/layer-wise-probing-on-bertoids/)