A SOTA fact-checking model developed by Bespoke Labs.



This is a grounded factuality checking model developed by Bespoke Labs.

The model takes a document (text) and a single sentence as input and determines whether the sentence is supported by the document. To fact-check a multi-sentence claim, first break the claim into individual sentences. The document does not need to be chunked unless it exceeds 32K tokens.
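For example, a multi-sentence claim can be split with any standard sentence tokenizer before each sentence is checked individually. The snippet below is a minimal illustration using a naive punctuation-based split; it is not part of the model itself.

import re

def split_claim(claim: str) -> list[str]:
    # Naive punctuation-based split; a proper sentence tokenizer
    # (e.g. nltk.sent_tokenize) is preferable for real use.
    return [s for s in re.split(r"(?<=[.!?])\s+", claim.strip()) if s]

claim = "The students are in the library. They are preparing for final exams."
print(split_claim(claim))
# ['The students are in the library.', 'They are preparing for final exams.']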

Bespoke-MiniCheck is the SOTA fact-checking model despite its small size.

Usage

The prompt template is as follows:

Document: {document}
Claim: {claim}

The response will either be Yes or No.
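As a rough sketch, the template can be filled in programmatically and sent to a locally running Ollama server. The model tag bespoke-minicheck, the default port 11434, and the /api/generate endpoint reflect a typical local setup and may differ in yours.

import requests

def check_claim(document: str, claim: str) -> str:
    # Fill the prompt template with the document and the claim.
    prompt = f"Document: {document}\nClaim: {claim}"
    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={"model": "bespoke-minicheck", "prompt": prompt, "stream": False},
        timeout=60,
    )
    resp.raise_for_status()
    # The model replies with a single Yes or No verdict.
    return resp.json()["response"].strip()

document = "A group of students gather in the school library to study for their upcoming final exams."
print(check_claim(document, "The students are preparing for an examination."))  # expected: Yes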

Examples

Prompt

Document: A group of students gather in the school library to study for their upcoming final exams.
Claim: The students are preparing for an examination.

Response

Yes

Prompt

Document: A group of students gather in the school library to study for their upcoming final exams.
Claim: The students are on vacation.

Response

No

Model performance

Figure: Bespoke-MiniCheck-7B performance on the LLM-AggreFact benchmark (performance.png).

Performance is evaluated on LLM-AggreFact, a newly collected benchmark (unseen by our models during training) that aggregates 11 recent human-annotated datasets on fact-checking and grounding LLM generations. Despite its small size, Bespoke-MiniCheck-7B achieves state-of-the-art performance on this benchmark.

References

Website

LLM-AggreFact Leaderboard