phasellm.exceptions#

Exception classes and tests for prompts, LLMs, and workflows.

Module Contents#

Functions#

isAcceptableLLMResponse(response_given, acceptable_options) → bool

Tests to confirm the response_given is in the list of acceptable_options. acceptable_options can also be a single string.

isLLMCodeExecutable(llm_code) → bool

Runs code and checks if any errors occur. Returns True if there are no errors.

isProperlyStructuredChat(messages, force_roles=False) → bool

Checks if messages are an array of dicts with (role, content) keys.

reviewOutputWithLLM(text, requirements, llm)

Has an LLM review an output and determine whether the output meets the given requirements.

phasellm.exceptions.isAcceptableLLMResponse(response_given, acceptable_options) → bool#

Tests to confirm the response_given is in the list of acceptable_options. acceptable_options can also be a single string.

Parameters:
  • response_given – The response given by the LLM.

  • acceptable_options – The acceptable options.

Returns:

True if the response is ‘acceptable’, otherwise throws an LLMResponseException.
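
A minimal usage sketch (the response string and options below are illustrative placeholders):

    from phasellm.exceptions import isAcceptableLLMResponse, LLMResponseException

    # Hypothetical LLM output and the answers we are willing to accept.
    response = "positive"
    options = ["positive", "negative", "neutral"]

    try:
        isAcceptableLLMResponse(response, options)  # returns True when the response matches
    except LLMResponseException:
        print("The LLM returned an out-of-scope answer.")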

phasellm.exceptions.isLLMCodeExecutable(llm_code: str) → bool#

Runs code and checks if any errors occur. Returns True if there are no errors.

Parameters:

llm_code – The code to run.

Returns:

True if the code is executable, otherwise throws an LLMCodeException.
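
A minimal usage sketch (the generated snippet is an illustrative placeholder). Because this function executes the code it is given, it should only be run on snippets you trust or in a sandboxed environment:

    from phasellm.exceptions import isLLMCodeExecutable, LLMCodeException

    # Hypothetical code produced by an LLM.
    generated_code = "x = 1 + 1\nprint(x)"

    try:
        isLLMCodeExecutable(generated_code)  # True if the code runs without errors
    except LLMCodeException:
        print("The generated code raised an error when executed.")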

phasellm.exceptions.isProperlyStructuredChat(messages, force_roles=False) → bool#

Checks if messages are an array of dicts with (role, content) keys.

Setting force_roles=True additionally confirms that the only roles present are “system”, “user”, and “assistant”, to abide by OpenAI’s API.

Parameters:
  • messages – The messages to check.

  • force_roles – If True, checks that the roles are “system”, “user”, and “assistant”.

Returns:

True if the messages are properly structured, otherwise False.
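
A minimal usage sketch (the message contents are illustrative):

    from phasellm.exceptions import isProperlyStructuredChat

    messages = [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Hello!"},
    ]

    isProperlyStructuredChat(messages)                    # True
    isProperlyStructuredChat(messages, force_roles=True)  # True: only OpenAI roles are used
    isProperlyStructuredChat([{"role": "user"}])          # False: missing the "content" key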

phasellm.exceptions.reviewOutputWithLLM(text, requirements, llm)#

Has an LLM review an output and determine whether the output meets the given requirements.

Parameters:
  • text – The text to review.

  • requirements – The requirements to review against.

  • llm – The LLM to use for the review.

Returns:

True if the text meets the requirements, otherwise throws an LLMReviewException.
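
A minimal usage sketch, assuming an LLM wrapper from phasellm.llms (OpenAIGPTWrapper and the model name below are assumptions; substitute whichever LLM object your version of the library provides):

    from phasellm.exceptions import reviewOutputWithLLM, LLMReviewException
    from phasellm.llms import OpenAIGPTWrapper  # assumed wrapper; any phasellm LLM should work

    llm = OpenAIGPTWrapper("sk-...", model="gpt-4")  # placeholder API key

    draft = "Our Q3 revenue grew 12% year over year."
    requirements = "Must mention Q3 and include a growth percentage."

    try:
        reviewOutputWithLLM(draft, requirements, llm)  # True when the requirements are met
    except LLMReviewException as e:
        print(f"Review failed: {e}")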

exception phasellm.exceptions.LLMReviewException(message)#

Bases: Exception

Exception that gets thrown when an LLM review finds that an output does not meet the stated requirements.

Parameters:

message – The error message

__repr__()#

Return repr(self).

exception phasellm.exceptions.ChatStructureException#

Bases: Exception

Exception that gets thrown when a chat structure isn’t correct (i.e., the messages are not well-formed (role, content) pairs).

__repr__()#

Return repr(self).

exception phasellm.exceptions.LLMCodeException(code, exc)#

Bases: Exception

Exception that wraps an error raised by code generated by an LLM.

Parameters:
  • code – The code that is raising an error.

  • exc – The exception that is being raised.

__repr__()#

Return repr(self).

exception phasellm.exceptions.LLMResponseException(response_given: str, acceptable_options: List[str])#

Bases: Exception

Exception that gets thrown when an LLM response is not in the list of acceptable options.

Parameters:
  • response_given – The response given by the LLM.

  • acceptable_options – The acceptable options for the LLM.

__repr__()#

Return repr(self).