CUDA out of memory (1) · #40 opened 9 months ago by comet24082002
Evaluation for the fine-tuned bge-m3 model (2) · #39 opened 9 months ago by comet24082002
Adding `safetensors` variant of this model · #38 opened 9 months ago by SFconvertbot
Got this error: TypeError: SafeTensorsInfo.__init__() got an unexpected keyword argument 'sharded' (10) · #37 opened 9 months ago by favioespinosav
Any 'small' version of this model with 384 dimensions? (4) · #36 opened 9 months ago by zaobao
Do I need to add the prefixes "query: " and "passage: " to input texts? (5) · #35 opened 9 months ago by zaobao
Endless loop with multiple GPUs? · #34 opened 9 months ago by Grosper
Using bge-m3 for clustering and search (1) · #33 opened 9 months ago by talavivi
Will the component models be available separately? (1) · #31 opened 9 months ago by libryo-ai
Reranker (4) · #30 opened 9 months ago by Totole
List of supported languages (1) · #29 opened 9 months ago by andrew123456789
Converting tokens back to their original form (3) · #28 opened 10 months ago by DAIEF
Does m3 need to be merged with the base model? (2) · #27 opened 10 months ago by biaodiluer
Which metric did you use for "Benchmarks from the open-source community"? (1) · #26 opened 10 months ago by DAIEF
Is using the bge-m3 model with langchain supported? (8) · #25 opened 10 months ago by Nicole828
This model is also the best for Finnish in my comparison · #24 opened 10 months ago by RASMUS
Code and some results for comparing with other embedding models on multilingual data (1) · #23 opened 10 months ago by Yannael
Do the inputs affect each other's embedding results? (2) · #22 opened 10 months ago by biaodiluer
What is the relationship between bge-M3 and baai_general_embedding? (2) · #20 opened 10 months ago by biaodiluer
How many GPUs are required to fine-tune bge-m3 on 1 million triplets? (3) · #18 opened 10 months ago by wilfoderek
How do you suggest using the ColBERT vectors? (1) · #16 opened 10 months ago by EquinoxElahin
There may be some minor bugs (3) · #14 opened 10 months ago by prudant
Serving the model (2) · #13 opened 10 months ago by prudant
Datasets (1) · #11 opened 10 months ago by AbdelkerimDassi
Issue while fine-tuning the embedding model because of use_reentrant = True (2) · #10 opened 11 months ago by DamianS89
Optimize inference speed (5) · #9 opened 11 months ago by CoolWP
OOM occurs while converting the model to TorchScript; I have a question about this issue (1) · #8 opened 11 months ago by LeeJungHoon
Add benchmark to MTEB (6) · #7 opened 11 months ago by sam-gab
Base model (16) · #6 opened 11 months ago by ambivalent02
It is now working in Colab (3) · #5 opened 11 months ago by LeeJungHoon
How does Chinese dense retrieval performance compare to BGE V1.5? (3) · #3 opened 11 months ago by TianyuLLM
OOMs on an 8 GB GPU, is this normal? (3) · #2 opened 11 months ago by tanimazsin130