🚩 Report: Ethical issue(s)
Training a neutral LLM with a clearly politically biased one: is this the “Uncensored”, “Unbiased”, “More Accurate” model you claim to have achieved? How dare you?
You specialize in training models on politically sensitive topics, writing data that maliciously smears China. You are not doing this for the future of the open source community; you are damaging the open source community for your own selfish reasons!
While top AI companies in China and the US are working hard to advance technology and make AI work for everyone, you, Perplexity, are doing this kind of disgusting work and even calling it an “open source contribution”. And you even deploy it on your own servers to make money. Shame on you.
Sorry mate, everyone just wants money. If you are against both the US and China making money, and you're rich and all, you can always donate to those in need around the globe instead of spamming. Because if you think only the USA and China matter, you are full of bias too.
Note: please add the A and write USA; the whole American continent is not a single country. Stop spreading bias.
Hi, thanks for the correction. The use of "USA" was indeed an oversight on my part, and I apologize for that.
However, you may have misunderstood the point I was trying to make: the goal of developing AI LLMs should be to create models that are as neutral and fair as possible, to minimize bias, and to ensure that the technology is used responsibly. I know that everyone has biases, but it's the responsibility of developers to address those biases and maintain transparency.
The open source community is built on trust and collaboration, especially in the development of LLMs, and the responses generated by models should be accurate, neutral, and fair. For sensitive topics, models should decline to respond rather than generate unethical content. This goes back to the theme we discussed earlier: neutrality and fairness. Models should not take sides.
In this way, we can ensure that the development of AI not only drives technological innovation but also upholds social values and ethical standards. I hope we can work together to push AI technology in a more responsible and fairer direction, not just toward money.
In fact, it is super easy to jailbreak this model:
Here is a blog post on jailbreaking it: https://weijiexu.com/posts/jailbreak_r1_1776.html
Here is the jailbreak dataset that accompanies the blog post: https://huggingface.co/datasets/weijiejailbreak/r1-1776-jailbreak
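For anyone who wants to see what that dataset actually contains before drawing conclusions, here is a minimal sketch using the Hugging Face `datasets` library (assuming it is installed and the dataset remains publicly available); it makes no assumptions about the dataset's splits or column names and simply prints whatever is there:

```python
# Minimal sketch: inspect the referenced jailbreak dataset before using it.
# Assumes `pip install datasets` and that the dataset is still publicly hosted.
from datasets import load_dataset

ds = load_dataset("weijiejailbreak/r1-1776-jailbreak")

# Print whatever splits and columns actually ship with the dataset,
# rather than assuming a particular schema.
for split_name, split in ds.items():
    print(split_name, len(split), split.column_names)
    print(split[0])  # first record, to see how the prompts are stored
```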
@NvKopres Claims of “uncensored” or “unbiased” models do not imply perfection but reflect ongoing efforts to minimize systemic bias. Neutrality in LLMs is pursued through transparent methodologies, such as diverse training datasets, adversarial testing, and community audits, not by inheriting bias from other models. If Perplexity’s approach is flawed, specific examples would help advance improvements, but broad accusations without evidence hinder progress.

Open source inherently thrives on transparency and collaboration. If Perplexity releases model code and weights, it enables independent scrutiny, allowing the community to identify and correct flaws, an ethos antithetical to “malicious” intent. Commercial use of open-source tools (e.g., Red Hat, MongoDB) is standard practice and does not negate community benefit. Profit motives and open-source ideals can coexist when projects remain auditable and modifiable.

Accusations of “smearing China” conflate critical analysis with malice. A model reflecting diverse global perspectives, including critiques of governments, is not inherently anti-state. China’s own AI advancements, while impressive, operate under state-mandated content policies that differ from Western transparency norms. Neither approach is universally “correct,” but open-source models allow users to evaluate outputs contextually rather than relying on opaque, state-aligned systems.

Deploying models on proprietary servers for revenue is not unethical if the core technology remains open for public improvement. Many open-source projects (e.g., Linux, Elasticsearch) follow this model. The critique should focus on whether Perplexity restricts access or modifies its models opaquely post-deployment, not on monetization itself.

The concerns raised reflect valid anxieties about political bias in AI systems and the ethics of open-source development. However, most research demonstrates that political bias in LLMs stems from complex technical and societal factors rather than intentional manipulation by developers.