Artificial intelligence is famous for generating factually incorrect information with confidence, making it easy to mistake for factually correct information. Bullshit Detector estimates whether a piece of content is factually correct.

The Bullshit Detector is turned off for now because I have no plans to continue this project. Since it is built on top of the OpenAI API, keeping it running costs money, so I have shut it down (it no longer works).


How does it work?

It programmatically rewrites the content into a question and then generates a few answers to that question at a high softmax temperature. If the answers convey the same message as the content, the content is likely true, because consistent answers mean the model is highly confident that this is the truth. If the model is not confident, it generates a different answer every time.
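The idea above can be sketched as a self-consistency check. The sketch below is a minimal illustration, not the project's actual code: `generate_answers` is a placeholder standing in for the real OpenAI API call (which would sample completions at a high temperature), and the question template, normalization, and agreement threshold are all assumptions for illustration.

```python
from collections import Counter

def generate_answers(question, n=5, temperature=1.2):
    """Placeholder for the model call. The real project would query the
    OpenAI API n times at a high softmax temperature; canned answers are
    returned here so the sketch runs without a network dependency."""
    return ["Paris is the capital of France."] * n

def normalize(answer):
    # Crude normalization so trivially different phrasings compare equal.
    return " ".join(answer.lower().split()).strip(".")

def consistency_score(answers):
    """Fraction of answers that agree with the most common answer."""
    counts = Counter(normalize(a) for a in answers)
    most_common_count = counts.most_common(1)[0][1]
    return most_common_count / len(answers)

def looks_truthful(statement, threshold=0.8):
    # Step 1 (assumed template): rewrite the statement as a question.
    question = f"Is the following true? {statement}"
    # Step 2: sample several high-temperature answers.
    answers = generate_answers(question)
    # Step 3: consistent answers imply the model is confident,
    # so the statement is likely true.
    return consistency_score(answers) >= threshold
```

A real implementation would replace the string-matching agreement check with something more robust, such as embedding similarity between answers.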

There is also another, cheaper way to do this that does not require sending multiple requests. Contact me if you are interested.


Use email for any inquiries.