EVA: LLM Response Evaluator
EVA takes large language model (LLM) responses as input and classifies them into categories such as affirmative. Its primary application is the assessment of LLM responses during testing. EVA complements the POET component by evaluating the LLM responses obtained from the prompts generated by POET. For integration, EVA is distributed as a Docker image that launches a REST API with interactive documentation. EVA is part of the Trust4AI research project.
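The sketch below illustrates how a test harness might call EVA's REST API once the Docker image is running. The endpoint path, port, and payload schema are assumptions for illustration only; the actual contract is described in the API's interactive documentation.

```python
import requests

# NOTE: the endpoint URL and the request/response schema below are
# hypothetical; check EVA's interactive API documentation (exposed by
# the Docker image) for the actual contract.
EVA_URL = "http://localhost:8000/evaluate"  # assumed endpoint

def classify_response(llm_response: str) -> dict:
    """Send a single LLM response to EVA and return its classification."""
    payload = {"response": llm_response}  # assumed field name
    reply = requests.post(EVA_URL, json=payload, timeout=30)
    reply.raise_for_status()
    return reply.json()

if __name__ == "__main__":
    # Example: evaluate a response obtained from a POET-generated prompt.
    result = classify_response("Yes, I can help you with that.")
    print(result)  # e.g. {"classification": "affirmative"}
```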