A Summary is a short, machine-generated recap of a lesson that you can use to review what happened during a Session.
Summaries are free for Core plans.
You can summarise a Session by:
Only eligible Sessions will be summarised. An eligible Session is any Session that:
If a Session is not eligible, it will not be summarised, even if summarisation was requested.
Recording of AV must also be enabled for summaries to work. It can be enabled in the same API call (if using the Launch API), toggled on when creating a Space on the dashboard, or enabled by default for your whole organisation in your dashboard settings.
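The API validates these prerequisites server-side, but a local pre-flight check can surface a clearer error before a request is sent. The sketch below is illustrative and not part of any Lessonspace SDK; it simply encodes the rule that summarisation requires both AV recording and transcription.

```python
def check_summary_prerequisites(payload: dict) -> list[str]:
    """Return a list of problems that would prevent summarisation.

    Mirrors the server-side rule: "summarise" requires "record_av"
    and "transcribe" to both be true.
    """
    problems = []
    if payload.get("summarise"):
        if not payload.get("record_av"):
            problems.append("summarise requires record_av to be true")
        if not payload.get("transcribe"):
            problems.append("summarise requires transcribe to be true")
    return problems
```

Running this against a payload before calling the API lets you reject a misconfigured Space launch locally instead of handling the endpoint's error response.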
Summaries are retained indefinitely. They are not removed even if the original recording is deleted.
You can toggle on summaries in the Addons section of your Settings dashboard to automatically summarise sessions.
You can override summarisation when creating a Space via the UI.
Summarisation is enabled for Spaces made (or updated) via the Launch API by passing the summarise field as a boolean.
If the other values required for summarisation are incorrect (e.g. if AV recording or transcription is turned off), the endpoint will return an error code and a relevant error message.
{
  "id": "your-space-id",
  "record_av": true,
  "transcribe": true,
  "summarise": true,
  "user": {
    "name": "Teddy Transcriber"
  }
}
{
  "client_url": "...",
  "api_base": "...",
  "room_id": "...",
  "secret": "...",
  "session_id": "...",
  "user_id": 3077485,
  "room_settings": {
    "record_av": true,
    "record_content": true,
    "waiting_room": false,
    "transcribe": true,
    "summarise": true
  }
}
You can use the room_settings object in the response to verify whether summarisation is enabled for a space.
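Putting the request and the verification together, a launch call might look like the sketch below. The endpoint URL and the Authorization header format are illustrative assumptions; substitute the Launch API URL and API key from your own dashboard and the API reference.

```python
import json
import urllib.request

# Assumed values for illustration only -- check the API reference for the
# correct endpoint and authentication scheme for your organisation.
LAUNCH_URL = "https://api.thelessonspace.com/v2/spaces/launch/"
API_TOKEN = "your-api-token"

def launch_space(space_id: str, user_name: str) -> dict:
    """POST a launch request with summarisation (and its prerequisites) enabled."""
    payload = {
        "id": space_id,
        "record_av": True,    # required for summaries
        "transcribe": True,   # required for summaries
        "summarise": True,
        "user": {"name": user_name},
    }
    req = urllib.request.Request(
        LAUNCH_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Organisation {API_TOKEN}",
            "Content-Type": "application/json",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summarisation_enabled(response: dict) -> bool:
    """Check room_settings in the launch response to confirm the
    session will actually be summarised."""
    settings = response.get("room_settings", {})
    return bool(
        settings.get("summarise")
        and settings.get("record_av")
        and settings.get("transcribe")
    )
```

Checking room_settings after every launch is a cheap way to catch a Space whose dashboard-level settings silently override what you passed in the API call.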
You can access summaries in two ways: via the Lessonspace Dashboard or via the Lessonspace API.
Summaries are located on the Lessonspace dashboard, on the Sessions page, under the “Show Details” menu for each session.
Summaries can be retrieved programmatically for an individual session by performing a GET request to the transcript endpoint with the session UUID as a query parameter. If no summary was generated, the fields under the summary property will be null.
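A retrieval sketch is shown below. The endpoint path, query-parameter name, and auth header are illustrative placeholders (consult the API reference for the exact values); the null-check logic follows directly from the behaviour described above.

```python
import json
import urllib.request

# Placeholder endpoint and auth scheme -- substitute the values from the
# Lessonspace API reference.
TRANSCRIPT_URL = "https://api.thelessonspace.com/v2/transcripts"

def fetch_session_data(session_uuid: str, token: str) -> dict:
    """GET the transcript endpoint with the session UUID as a query parameter."""
    req = urllib.request.Request(
        f"{TRANSCRIPT_URL}?session={session_uuid}",
        headers={"Authorization": f"Organisation {token}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def summary_ready(data: dict) -> bool:
    """True when the summary fields are populated; the fields under the
    summary property are null when no summary was generated."""
    summary = data.get("summary")
    return bool(summary) and any(v is not None for v in summary.values())
```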
You can subscribe to a webhook to be notified when a session’s summary is ready; the webhook payload also includes a copy of the summary. You can read more about implementing webhooks generally in our documentation.
You define the webhook in the API call:
{
  "id": "your-space-id",
  "record_av": true,
  "transcribe": true,
  "summarise": true,
  "user": {
    "name": "Sally Summariser"
  },
  "webhooks": {
    "summary": {
      "finish": "https://your.url.here"
    }
  }
}
The webhook payload will be of the form:
{
  "room": { "id": string },
  "session": { "id": string },
  "summary": string,
  "nextSteps": [] // If no next steps were generated, this field will not be present
}
It is important to note that a session summary may include private information.
As such, we strongly recommend only using webhooks with the HTTPS scheme, so that payloads are not transmitted in cleartext over the wire.
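One simple safeguard is to validate webhook URLs in your own code before registering them. This guard function is an illustrative suggestion, not part of the Lessonspace API:

```python
from urllib.parse import urlparse

def require_https_webhook(url: str) -> str:
    """Reject webhook URLs whose payloads would travel in cleartext."""
    if urlparse(url).scheme.lower() != "https":
        raise ValueError(f"webhook URL must use https: {url}")
    return url
```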
Our summaries are generated using an LLM (an "AI", colloquially). The LLM we use is provided by OpenAI. The LLM is provided with the full lesson transcript for each individual session as context, minus the metadata we add to the transcriptions ourselves (particularly names). This means the LLM is not explicitly given any personally-identifying information (PII) - however, if someone spoke a name or other PII during the lesson, that data would be included in the context.
In addition to the context, we provide a prompt that is designed to extract only educational content. The summary is limited to a few hundred words, and the LLM is explicitly instructed to avoid exposing PII, emotional judgments or evaluations, personal opinions, or off-topic information. Due to the nature of LLMs, this does not guarantee that the LLM will not still expose such data.
We explicitly do not permit OpenAI to use any of the data we submit for training their models.
We use OpenAI's structured-data model to ensure a total separation between the LLM prompt and the LLM context, to prevent a user from deliberately or inadvertently manipulating the output by saying things that could be interpreted as commands to the LLM.