do-me committed
Commit b2d7738 · verified · 1 parent: 4921889

Update README.md

Files changed (1):
  1. README.md +2 -2
README.md CHANGED
@@ -64,8 +64,8 @@ Also, there is the option of summarizing the results with generative AI like Qwe
 #### Advanced use cases
 - [Translate words with multilingual embeddings](https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_70320cde&firstOnly=true&inferencingActive=False) or see which words out of a given list are most similar to your input word. Using e.g. the index of ~30k English words you can use more than 100 input languages to query! Note that here the expert settings change so that only the first match is displayed.
 - [English synonym finder](https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&firstOnly=true&inferencingActive=False), using again the index of ~30k English words but with slightly better (and smaller) English-only embeddings. Same expert settings here.
- - The [universal index idea](https://github.com/do-me/SemanticFinder/discussions/48), i.e. use the 30k English words index and do not inference for any new words. In this way you can perform **instant** semantic search on unknown / unseen / not indexed texts! Use [this URL](https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&inferencingActive=False) and add then copy and paste any text of your choice into the text field. Inferencing any new words is turned off for speed gains.
- - A hybrid version of the universal index where you use the 30k English words as start index but then "fill up" with all the additional words the index doesn't know yet. For this option just use [this URL](https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&inferencingActive=True) where the inferencing is turned on again. This yields best results and might be a good compromise assuming that new texts generally don't have that many new words. Even if it's a couple of hundreds (like in a particular research paper in a niche domain) inferencing is quite fast.
+ - The [universal index idea](https://github.com/do-me/SemanticFinder/discussions/48), i.e. use the 30k English words index and do not inference for any new words. In this way you can perform **instant** semantic search on unknown / unseen / not indexed texts! Use [this URL](https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&inferencingActive=False&universalIndexSettingsWordLevel) and add then copy and paste any text of your choice into the text field. Inferencing any new words is turned off for speed gains.
+ - A hybrid version of the universal index where you use the 30k English words as start index but then "fill up" with all the additional words the index doesn't know yet. For this option just use [this URL](https://do-me.github.io/SemanticFinder/?hf=List_of_the_Most_Common_English_Words_0d1e28dc&inferencingActive=True&universalIndexSettingsWordLevel) where the inferencing is turned on again. This yields best results and might be a good compromise assuming that new texts generally don't have that many new words. Even if it's a couple of hundreds (like in a particular research paper in a niche domain) inferencing is quite fast.
 
 ## If you have any feedback/ideas/feature requests please open an issue or create a PR in the GitHub repo.
 ## ⭐Stars very welcome to spread the word and democratize semantic search!⭐
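The links in the diff above configure SemanticFinder entirely through URL query parameters (`hf` selects a prebuilt index, `inferencingActive` toggles embedding of unseen words, `firstOnly` shows only the top match). A minimal sketch of building such a shareable URL programmatically — assuming these three parameters, taken directly from the URLs above, are the only ones needed; any other parameters the app may accept are not covered here:

```python
from urllib.parse import urlencode

# Base URL as it appears in the README's example links.
BASE = "https://do-me.github.io/SemanticFinder/"

def semanticfinder_url(index_name, inferencing_active, first_only=None):
    """Build a shareable SemanticFinder URL for a prebuilt index.

    Parameter names (hf, inferencingActive, firstOnly) are copied from the
    example links in the diff; booleans are serialized as "True"/"False"
    to match those links.
    """
    params = {"hf": index_name, "inferencingActive": inferencing_active}
    if first_only is not None:
        params["firstOnly"] = first_only
    return BASE + "?" + urlencode(params)

# Universal-index setup from the diff: prebuilt 30k-word index, inferencing off.
url = semanticfinder_url(
    "List_of_the_Most_Common_English_Words_0d1e28dc",
    inferencing_active=False,
)
print(url)
```

`urlencode` stringifies the boolean, so the result carries `inferencingActive=False` exactly as in the universal-index link above; passing `inferencing_active=True` reproduces the hybrid variant.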