Python LLM SDK Technical Reference

This reference provides technical specifications for each TruEra Python LLM SDK method and function. Each API call is listed in the navigator panel on the right under Table of contents, organized by class.

Evaluation dataclass

A dataset configuration used for creating and ingesting an evaluation dataset.

Example Usage:

# Create a dataset configuration for an evaluation dataset
from truera.client.experimental.truera_generative_text_workspace import LLMDatasetConfig, Evaluation
dataset_config = LLMDatasetConfig(config=Evaluation())

# Ingest traces and feedbacks to the evaluation dataset
tru_recorder = tru.wrap_basic_app(
    app=my_application,
    project_name="My Project",
    app_name="My Application",
    feedbacks=[],
    dataset_config=dataset_config
)

Experiment dataclass

A dataset configuration used for creating and ingesting an experimentation dataset.

Example Usage:

# Create a dataset configuration for an experimentation dataset
from truera.client.experimental.truera_generative_text_workspace import LLMDatasetConfig, Experiment
dataset_config = LLMDatasetConfig(config=Experiment())

# Ingest traces and feedbacks to the experimentation dataset
tru_recorder = tru.wrap_basic_app(
    app=my_application,
    project_name="My Project",
    app_name="My Application",
    feedbacks=[],
    dataset_config=dataset_config
)

IngestionContext dataclass

IngestionContext(project_id: str, data_collection_id: str, app_id: str, split_id: str, feedback_function_name_id_map: Mapping[str, str])
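
IngestionContext is a simple container for the identifiers used during ingestion. A minimal construction sketch (the import path and all ID values below are hypothetical placeholders; in normal use this object is produced by the SDK rather than built by hand):

from truera.client.experimental.truera_generative_text_workspace import IngestionContext

# Hypothetical placeholder IDs, for illustration only
ingestion_context = IngestionContext(
    project_id="<PROJECT ID>",
    data_collection_id="<DATA COLLECTION ID>",
    app_id="<APP ID>",
    split_id="<SPLIT ID>",
    feedback_function_name_id_map={"relevance": "<FEEDBACK FUNCTION ID>"}
)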

LLMDatasetConfig (BaseModel)

The dataset configuration for a TruLens application. It is used to specify which dataset to ingest traces and feedbacks to.

Example Usage:

# Create a dataset configuration for a production dataset
from truera.client.experimental.truera_generative_text_workspace import LLMDatasetConfig, Production
dataset_config = LLMDatasetConfig(config=Production())

# Create a dataset configuration for an evaluation dataset
from truera.client.experimental.truera_generative_text_workspace import LLMDatasetConfig, Evaluation
dataset_config = LLMDatasetConfig(config=Evaluation())

# Create a dataset configuration for an experimentation dataset
from truera.client.experimental.truera_generative_text_workspace import LLMDatasetConfig, Experiment
dataset_config = LLMDatasetConfig(config=Experiment())

model_extra: dict[str, Any] | None inherited property readonly

Get extra fields set during validation.

Returns:

Type Description
dict[str, Any] | None

A dictionary of extra fields, or None if config.extra is not set to "allow".

model_fields_set: set[str] inherited property readonly

Returns the set of fields that have been explicitly set on this model instance.

Returns:

Type Description
set[str]

A set of strings representing the fields that have been set, i.e. that were not filled from defaults.
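
A brief sketch of these inherited Pydantic properties on a dataset configuration (standard Pydantic v2 behavior):

dataset_config = LLMDatasetConfig(config=Production())

# Fields explicitly passed to the constructor
dataset_config.model_fields_set
>>> {'config'}

# Extra fields are only retained when the model config sets extra="allow"; otherwise None
dataset_config.model_extra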

copy(self, *, include=None, exclude=None, update=None, deep=False) inherited

Returns a copy of the model.

Deprecated

This method is now deprecated; use model_copy instead.

If you need include or exclude, use:

data = self.model_dump(include=include, exclude=exclude, round_trip=True)
data = {**data, **(update or {})}
copied = self.model_validate(data)

Parameters:

Name Type Description Default
include AbstractSetIntStr | MappingIntStrAny | None

Optional set or mapping specifying which fields to include in the copied model.

None
exclude AbstractSetIntStr | MappingIntStrAny | None

Optional set or mapping specifying which fields to exclude in the copied model.

None
update Dict[str, Any] | None

Optional dictionary of field-value pairs to override field values in the copied model.

None
deep bool

If True, the values of fields that are Pydantic models will be deep-copied.

False

Returns:

Type Description
Model

A copy of the model with included, excluded and updated fields as specified.

model_copy(self, *, update=None, deep=False) inherited

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#model_copy

Returns a copy of the model.

Parameters:

Name Type Description Default
update dict[str, Any] | None

Values to change/add in the new model. Note: the data is not validated before creating the new model. You should trust this data.

None
deep bool

Set to True to make a deep copy of the model.

False

Returns:

Type Description
Model

New model instance.
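
A short usage sketch (assuming the LLMDatasetConfig, Production, and Evaluation imports shown above):

dataset_config = LLMDatasetConfig(config=Production())

# Copy the configuration, overriding the config field; update values are not re-validated
copied_config = dataset_config.model_copy(update={"config": Evaluation()}, deep=True)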

model_dump(self, *, mode='python', include=None, exclude=None, by_alias=False, exclude_unset=False, exclude_defaults=False, exclude_none=False, round_trip=False, warnings=True) inherited

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump

Generate a dictionary representation of the model, optionally specifying which fields to include or exclude.

Parameters:

Name Type Description Default
mode Literal['json', 'python'] | str

The mode in which to_python should run. If mode is 'json', the output will only contain JSON serializable types. If mode is 'python', the output may contain non-JSON-serializable Python objects.

'python'
include IncEx

A list of fields to include in the output.

None
exclude IncEx

A list of fields to exclude from the output.

None
by_alias bool

Whether to use the field's alias in the dictionary key if defined.

False
exclude_unset bool

Whether to exclude fields that have not been explicitly set.

False
exclude_defaults bool

Whether to exclude fields that are set to their default value.

False
exclude_none bool

Whether to exclude fields that have a value of None.

False
round_trip bool

If True, dumped values should be valid as input for non-idempotent types such as Json[T].

False
warnings bool

Whether to log warnings when invalid fields are encountered.

True

Returns:

Type Description
dict[str, Any]

A dictionary representation of the model.
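
A short usage sketch on a dataset configuration:

dataset_config = LLMDatasetConfig(config=Production())

# Dump to a plain dict; mode="json" restricts values to JSON-serializable types
config_dict = dataset_config.model_dump(mode="json", exclude_none=True)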

model_dump_json(self, *, indent=None, include=None, exclude=None, by_alias=False, exclude_unset=False, exclude_defaults=False, exclude_none=False, round_trip=False, warnings=True) inherited

Usage docs: https://docs.pydantic.dev/2.6/concepts/serialization/#modelmodel_dump_json

Generates a JSON representation of the model using Pydantic's to_json method.

Parameters:

Name Type Description Default
indent int | None

Indentation to use in the JSON output. If None is passed, the output will be compact.

None
include IncEx

Field(s) to include in the JSON output.

None
exclude IncEx

Field(s) to exclude from the JSON output.

None
by_alias bool

Whether to serialize using field aliases.

False
exclude_unset bool

Whether to exclude fields that have not been explicitly set.

False
exclude_defaults bool

Whether to exclude fields that are set to their default value.

False
exclude_none bool

Whether to exclude fields that have a value of None.

False
round_trip bool

If True, dumped values should be valid as input for non-idempotent types such as Json[T].

False
warnings bool

Whether to log warnings when invalid fields are encountered.

True

Returns:

Type Description
str

A JSON string representation of the model.
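
A short usage sketch on a dataset configuration:

dataset_config = LLMDatasetConfig(config=Production())

# Serialize to a JSON string with two-space indentation
config_json = dataset_config.model_dump_json(indent=2, exclude_none=True)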

model_post_init(self, _BaseModel__context) inherited

Override this method to perform additional initialization after __init__ and model_construct. This is useful if you want to do some validation that requires the entire model to be initialized.
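
A minimal sketch of overriding this hook in a generic Pydantic model (not TruEra-specific):

from pydantic import BaseModel

class MyConfig(BaseModel):
    name: str

    def model_post_init(self, __context) -> None:
        # Runs after __init__ and model_construct, once all fields are populated
        assert self.name, "name must be non-empty"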

Production dataclass

A dataset configuration used for creating and ingesting a production dataset.

Example Usage:

# Create a dataset configuration for a production dataset
from truera.client.experimental.truera_generative_text_workspace import LLMDatasetConfig, Production
dataset_config = LLMDatasetConfig(config=Production())

# Ingest traces and feedbacks to the production dataset
tru_recorder = tru.wrap_basic_app(
    app=my_application,
    project_name="My Project",
    app_name="My Application",
    feedbacks=[],
    dataset_config=dataset_config
)

TruApp

A wrapper for an application that allows for ingestion of traces and feedbacks. Do not instantiate this class directly. Instead, use one of the TrueraGenerativeTextWorkspace.wrap_*_app() methods to create a TruApp instance.

Example Usage:

tru = TrueraGenerativeTextWorkspace(...)
tru_app: TruApp = tru.wrap_basic_app(...)
with tru_app:
    tru_app.app.example_run_fn()

TrueraGenerativeTextWorkspace

The TrueraGenerativeTextWorkspace class is the primary interface for interacting with TruEra's generative text workspace. It provides methods for creating, deleting, and managing projects, applications, and feedback functions, as well as for ingesting traces and feedbacks.

Example Usage:

from truera.client.experimental.truera_generative_text_workspace import TrueraGenerativeTextWorkspace
from truera.client.experimental.truera_generative_text_workspace import LLMDatasetConfig, Production
from truera.client.truera_authentication import BasicAuthentication

# Create a TrueraGenerativeTextWorkspace instance
tru = TrueraGenerativeTextWorkspace(
    connection_string="<CONNECTION STRING>",
    authentication=BasicAuthentication(
        username="<USERNAME>",
        password="<PASSWORD>"
    )
)

# Create a new project
tru.add_project("My Project")

# Get the names of all projects in the workspace
tru.get_projects()
>>> ["My Project"]

# Create a dataset config
dataset_config = LLMDatasetConfig(config=Production())

# Create a new application and production dataset 
tru_recorder = tru.wrap_basic_app(
    app=my_application,
    project_name="My Project",
    app_name="My Application",
    feedbacks=[],
    dataset_config=dataset_config
)

# Run the application
with tru_recorder:
    tru_recorder.app.run("This is a user input prompt")

# Get trace data
trace_data = tru.get_trace_data(
    app_name="My Application",
    project_name="My Project",
    dataset_config=dataset_config
)

add_project(self, project_name)

Creates a new project in the workspace.

Parameters:

Name Type Description Default
project_name str

The name of the project to create.

required

add_user_feedback(self, project_name, app_name, feedback_function_name, trace_id, dataset_config, result)

Ingest the result of user-provided feedback.

Parameters:

Name Type Description Default
project_name str

The name of the project to ingest feedback into.

required
app_name str

The name of the application to ingest traces and feedbacks into.

required
feedback_function_name str

The name of the feedback function that the feedback is associated with.

required
trace_id str

The id of the trace that the feedback is associated with.

required
dataset_config LLMDatasetConfig

The dataset configuration. The config attribute must be one of {Production, Experiment, or Evaluation}.

required
result float

The result of the feedback to ingest.

required
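
A usage sketch (assuming an existing workspace instance tru, an ingested trace whose ID is held in trace_id, and a feedback function named "relevance" already present in the project):

tru.add_user_feedback(
    project_name="My Project",
    app_name="My Application",
    feedback_function_name="relevance",
    trace_id=trace_id,
    dataset_config=LLMDatasetConfig(config=Production()),
    result=1.0
)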

configure_feedback_functions(self, project_name, feedback_function_id, feedback_function_name=None, threshold=None)

Sets feedback function metadata, such as name and threshold.

Parameters:

Name Type Description Default
project_name str

The name of the project.

required
feedback_function_id str

The ID of the feedback function to configure.

required
feedback_function_name Optional[str]

If provided, sets the feedback function name to this value. Defaults to None.

None
threshold Optional[float]

If provided, sets the feedback function threshold to this value. Defaults to None.

None
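
A usage sketch (assuming an existing workspace instance tru; the feedback function ID is a placeholder and can be looked up with get_feedback_function_metadata):

tru.configure_feedback_functions(
    project_name="My Project",
    feedback_function_id="<FEEDBACK FUNCTION ID>",
    feedback_function_name="Context Relevance",
    threshold=0.5
)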

get_apps(self, project_name)

Get the names of all applications in a project.

Parameters:

Name Type Description Default
project_name str

The name of the project to get application data for.

required

Returns:

Type Description
Sequence[str]

A list of application names in the specified project.
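
A short usage sketch (assuming an existing workspace instance tru):

tru.get_apps(project_name="My Project")
>>> ["My Application"]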

get_feedback_function_evaluations(self, project_name, app_name, dataset_config, trace_id)

Get feedback function evaluations for a given trace.

Parameters:

Name Type Description Default
project_name str

The name of the project that the trace belongs to.

required
app_name str

The name of the application that the trace belongs to.

required
dataset_config LLMDatasetConfig

The dataset type that the trace belongs to. The config attribute must be one of {Production, Experiment, or Evaluation}.

required
trace_id str

The ID of the trace to get feedback function evaluations for.

required

Returns:

Type Description
DataFrame

A pandas DataFrame of feedback function evaluations for the given project, application, dataset configuration, and trace ID.
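
A usage sketch (assuming an existing workspace instance tru and a trace ID obtained, for example, from get_trace_data):

evaluations = tru.get_feedback_function_evaluations(
    project_name="My Project",
    app_name="My Application",
    dataset_config=LLMDatasetConfig(config=Production()),
    trace_id=trace_id
)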

get_feedback_function_metadata(self, project_name)

Gets feedback function metadata for a given project.

Parameters:

Name Type Description Default
project_name str

The name of the project to get feedback function metadata for.

required

Returns:

Type Description
DataFrame

A pandas DataFrame of feedback function metadata for the given project.
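
A short usage sketch (assuming an existing workspace instance tru):

metadata = tru.get_feedback_function_metadata(project_name="My Project")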

get_feedback_functions(self, project_name)

Get the names of all feedback functions in a project.

Parameters:

Name Type Description Default
project_name str

The name of the project to get feedback function data for.

required

Returns:

Type Description
Sequence[str]

A list of feedback function names in the specified project.
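
A short usage sketch (assuming an existing workspace instance tru):

feedback_function_names = tru.get_feedback_functions(project_name="My Project")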

get_projects(self)

Get the names of all projects in the workspace.

Returns:

Type Description
Sequence[str]

A list of project names in the workspace.

get_trace_data(self, app_name, project_name, dataset_config, trace_id=None, include_feedback_aggregations=False, include_spans=False)

Retrieves ingested trace data and optionally, feedback aggregations and span information.

Parameters:

Name Type Description Default
app_name str

The name of the application to get trace data from.

required
project_name str

The name of the project to get trace data from.

required
dataset_config LLMDatasetConfig

The dataset to query. The config attribute must be one of {Production, Experiment, or Evaluation}.

required
trace_id Optional[str]

The ID of the trace to get data for. If not provided, gets all trace data. Defaults to None.

None
include_feedback_aggregations bool

If True, includes aggregate feedback function data in the response. Note this can be very expensive to compute for all traces. Defaults to False.

False
include_spans Optional[bool]

If True, includes trace span data in the response. Note this can be very expensive to compute across all traces. Defaults to False.

False

Returns:

Type Description
DataFrame

A pandas DataFrame of trace data for the given project, application, dataset configuration, and (optionally) trace ID.
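
A usage sketch showing the optional parameters (assuming an existing workspace instance tru and a known trace ID; both include_* flags can be expensive to compute across all traces):

trace_data = tru.get_trace_data(
    app_name="My Application",
    project_name="My Project",
    dataset_config=LLMDatasetConfig(config=Production()),
    trace_id=trace_id,
    include_feedback_aggregations=True,
    include_spans=True
)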

run_feedback_functions(self, app, project_name, app_name, trace, feedback_functions, dataset_config)

Run feedback functions on a trace from a wrapped application and ingest results.

Parameters:

Name Type Description Default
app TruApp

Wrapped application object.

required
project_name str

The name of the project to ingest feedbacks into.

required
app_name str

The name of the application to ingest traces and feedbacks into.

required
trace trulens_schema.Record

The trace to evaluate on.

required
feedback_functions Sequence[trulens_eval.Feedback]

The feedback functions to run.

required
dataset_config LLMDatasetConfig

The dataset configuration. The config attribute must be one of {Production, Experiment, or Evaluation}.

required

Returns:

Type Description
Sequence[trulens_eval.schema.FeedbackResult]

A list of the results from running the feedback functions on the trace.
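
A usage sketch (assuming tru_recorder is a TruApp returned by one of the wrap_*_app() methods, trace is a trulens Record captured from a run of that app, and my_feedback is an existing trulens_eval.Feedback instance):

results = tru.run_feedback_functions(
    app=tru_recorder,
    project_name="My Project",
    app_name="My Application",
    trace=trace,
    feedback_functions=[my_feedback],
    dataset_config=LLMDatasetConfig(config=Production())
)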

wrap_basic_app(self, app, project_name, app_name, feedbacks, dataset_config, feedback_thresholds=None)

Wrap a basic application to ingest with TruEra.

Parameters:

Name Type Description Default
app Any

The application object to wrap.

required
project_name str

The name of the project to ingest traces and feedbacks into.

required
app_name str

The name of the application to ingest traces and feedbacks into.

required
feedbacks Sequence[trulens_eval.Feedback]

The feedback functions to evaluate for each trace evaluated with this application.

required
dataset_config LLMDatasetConfig

The dataset configuration. The config attribute must be one of {Production, Experiment, or Evaluation}.

required
feedback_thresholds Optional[Mapping[str, float]]

A mapping of feedback function names to threshold values. Any feedback function not in the mapping will use the default threshold of 0. Defaults to None.

None

Returns:

Type Description
TruApp

The wrapped application. Call the TruLens application through this object to record traces and feedback functions.
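
A sketch that also sets a per-feedback-function threshold (assuming my_feedback is an existing trulens_eval.Feedback instance named "relevance"):

tru_recorder = tru.wrap_basic_app(
    app=my_application,
    project_name="My Project",
    app_name="My Application",
    feedbacks=[my_feedback],
    dataset_config=LLMDatasetConfig(config=Production()),
    feedback_thresholds={"relevance": 0.5}
)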

wrap_custom_app(self, app, project_name, app_name, feedbacks, dataset_config, feedback_thresholds=None)

Wrap a custom application.

Parameters:

Name Type Description Default
app Any

The application object to wrap.

required
project_name str

The name of the project to ingest traces and feedbacks into.

required
app_name str

The name of the application to ingest traces and feedbacks into.

required
feedbacks Sequence[trulens_eval.Feedback]

The feedback functions to evaluate for each trace evaluated with this application.

required
dataset_config LLMDatasetConfig

The dataset configuration. The config attribute must be one of {Production, Experiment, or Evaluation}.

required
feedback_thresholds Optional[Mapping[str, float]]

A mapping of feedback function names to threshold values. Any feedback function not in the mapping will use the default threshold of 0. Defaults to None.

None

Returns:

Type Description
TruApp

The wrapped application. Call the TruLens application through this object to record traces and feedback functions.
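
A usage sketch (assuming an existing workspace instance tru and a custom application object my_custom_application; the run method below is a placeholder for whatever entry point the application exposes):

tru_recorder = tru.wrap_custom_app(
    app=my_custom_application,
    project_name="My Project",
    app_name="My Custom Application",
    feedbacks=[],
    dataset_config=LLMDatasetConfig(config=Production())
)

with tru_recorder:
    tru_recorder.app.run("This is a user input prompt")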

wrap_langchain_app(self, app, project_name, app_name, feedbacks, dataset_config, feedback_thresholds=None)

Wrap a LangChain application.

Parameters:

Name Type Description Default
app Any

The application object to wrap.

required
project_name str

The name of the project to ingest traces and feedbacks into.

required
app_name str

The name of the application to ingest traces and feedbacks into.

required
feedbacks Sequence[trulens_eval.Feedback]

The feedback functions to evaluate for each trace in this application.

required
dataset_config LLMDatasetConfig

Defines the dataset type. The config attribute must be one of {Production, Experiment, or Evaluation}.

required
feedback_thresholds Optional[Mapping[str, float]]

A mapping of feedback function names to threshold values. Any feedback function not in the mapping will use the default threshold of 0. Defaults to None.

None

Returns:

Type Description
TruApp

The wrapped application. Call the TruLens application through this object to record traces and feedback functions.
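
A usage sketch (assuming an existing workspace instance tru and an existing LangChain chain my_chain):

tru_recorder = tru.wrap_langchain_app(
    app=my_chain,
    project_name="My Project",
    app_name="My LangChain Application",
    feedbacks=[],
    dataset_config=LLMDatasetConfig(config=Production())
)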

wrap_llama_index_app(self, app, project_name, app_name, feedbacks, dataset_config, feedback_thresholds=None)

Wrap a LlamaIndex application.

Parameters:

Name Type Description Default
app Any

The application object to wrap.

required
project_name str

The name of the project to ingest traces and feedbacks into.

required
app_name str

The name of the application to ingest traces and feedbacks into.

required
feedbacks Sequence[trulens_eval.Feedback]

The feedback functions to evaluate for each trace evaluated with this application.

required
dataset_config LLMDatasetConfig

The dataset configuration. The config attribute must be one of {Production, Experiment, or Evaluation}.

required
feedback_thresholds Optional[Mapping[str, float]]

A mapping of feedback function names to threshold values. Any feedback function not in the mapping will use the default threshold of 0. Defaults to None.

None

Returns:

Type Description
TruApp

The wrapped application. Call the TruLens application through this object to record traces and feedback functions.
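
A usage sketch (assuming an existing workspace instance tru and an existing LlamaIndex query engine my_query_engine):

tru_recorder = tru.wrap_llama_index_app(
    app=my_query_engine,
    project_name="My Project",
    app_name="My LlamaIndex Application",
    feedbacks=[],
    dataset_config=LLMDatasetConfig(config=Production())
)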