Artificial intelligence is famous for generating factually incorrect information with confidence, making it easy to mistake for factually correct information. Bullshit Detector checks whether content is factually correct.


How does it work?

It programmatically reverses the content into a question and then generates a few answers to that question with a high softmax temperature. If the answers convey the same message as the content, the content is likely true: consistent answers mean the model is confident that this is the truth. If the model is not confident, it will generate a different answer every time.
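The check above can be sketched roughly as follows. This is a minimal illustration, not the project's actual code: the question-rewriting step and the high-temperature sampling would be done by a language model, which is stubbed out here, and the semantic comparison is approximated with plain string similarity (`difflib`). All function names are hypothetical.

```python
import difflib

def content_to_question(content: str) -> str:
    # Hypothetical: in practice a model would rephrase the
    # statement as a question; here we just wrap it.
    return f"Is the following true? {content}"

def answers_agree(answers: list[str], threshold: float = 0.6) -> bool:
    # Crude stand-in for semantic comparison: pairwise string
    # similarity between the first answer and each other sample.
    base = answers[0]
    sims = [difflib.SequenceMatcher(None, base, a).ratio()
            for a in answers[1:]]
    return all(s >= threshold for s in sims)

def looks_like_bullshit(sampled_answers: list[str],
                        threshold: float = 0.6) -> bool:
    # sampled_answers: answers a model generated at high
    # temperature for the same question. Divergent answers
    # suggest low confidence, i.e. likely bullshit.
    return not answers_agree(sampled_answers, threshold)
```

A real implementation would compare answers by meaning (e.g. with embeddings or another model call) rather than by character overlap, since two answers can agree while being worded very differently.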

There's also another, cheaper way to do this that doesn't require sending multiple requests. Contact me if you are interested.


Use email for any inquiries.