Introduction
Test case functionality in Docebo Connect helps you validate recipes in a safe, repeatable way before you rely on them in production projects. You can define input data for a recipe, run the recipe with that data, and review the output and logs in a controlled environment. This reduces the risk of unexpected behavior when you connect Docebo to external systems.
You can use test cases to:
- Confirm that a new recipe behaves as you expect
- Reproduce and troubleshoot issues that you observe in production
- Demonstrate integration behavior to stakeholders without changing live data
This article describes how test cases work in Docebo Connect and how you can use them effectively.
Activating Docebo Connect
To activate Docebo Connect on your platform or sandbox, please reach out to your Docebo account manager. Note that, depending on the integration requirements, there may be associated costs.
Prerequisites
Before you use test case functionality, make sure that:
- Docebo Connect is active on your platform
- You have permission to create and edit recipes in a Docebo Connect project
- The recipes you want to test are present in a project and can run without missing connections or configuration
If your recipes call external systems, consider using sandbox or test accounts for those systems. A test case runs the same logic as a normal recipe run, so if a recipe calls production endpoints, a test case can create, update, or delete real data in those systems.
Where you can use test cases
You work with test cases at the recipe level. Test cases help you test:
- A whole recipe from trigger to last step
- A specific branch of a recipe that depends on conditions
- A single action or a small group of actions that you want to verify in isolation
You manage test cases from within the recipe editor. The interface allows you to:
- View existing test cases associated with the recipe
- Create new test cases with their own input and configuration
- Run test cases and review their results
How test cases work
A test case is a named scenario that stores:
- Which recipe to run
- Which trigger or starting point to use
- Input data for that trigger or for the first testable step
- Optional overrides such as specific lookup table values for the test
When you run a test case, Docebo Connect runs the recipe using the stored input data. The platform records a test run that you can inspect in the same way you inspect a normal recipe run, including step-by-step output, errors, and log messages.
Test cases are attached to recipes. If you clone a recipe, the new copy does not automatically inherit test cases unless the cloning documentation states otherwise. Treat test cases as part of your recipe design, and recreate or adjust them when you duplicate recipes across projects.
Creating a test case
To create a test case for a recipe, open the recipe in Docebo Connect, navigate to the Test cases tab, then press the Create a test case button. The recipe editor then guides you through a flow similar to the one below.
First, choose the scope for the test. You can start from the recipe trigger or from a specific step in the recipe. When you start at the trigger, the test simulates a full run. When you start at a later step, the test uses values you provide instead of actual trigger data.
Next, define the input data. You can enter field values directly in the test case form, or paste a structured payload when the trigger or step expects a JSON body. Use representative data that matches the values your production integrations will process. For example, if you test a user synchronization recipe, supply realistic-looking user attributes rather than placeholder text.
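For instance, a test payload for a user synchronization trigger might look like the sketch below. The field names are illustrative assumptions, not a fixed Docebo schema; use the fields your recipe's trigger actually expects.

```python
import json

# Illustrative test input for a user-synchronization trigger.
# Field names are hypothetical; match them to your recipe's data tree.
test_input = {
    "user": {
        "username": "jane.doe",
        "email": "jane.doe@example.com",
        "first_name": "Jane",
        "last_name": "Doe",
        "status": "active",
    },
    "source_system": "hr-sandbox",
}

# Paste the serialized form into the test case form when the
# trigger or step expects a JSON body.
print(json.dumps(test_input, indent=2))
```

Keeping the payload in source control alongside your integration notes makes it easy to rebuild the test case if you recreate the recipe in another project.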
Then, review any dependencies. Check whether the recipe reads from lookup tables, uses environment variables, or calls other recipes. If the test case needs specific lookup values or environment settings, make sure that you have configured them in the project before you run the test.
Finally, save the test case with a clear name. Use a naming convention that describes purpose and scope, such as "Create user – valid payload" or "Enrollment – missing course". A clear name makes it easier to find and reuse the test later.
Running a test case
When you run a test case Docebo Connect executes the recipe with the stored input. The platform shows the run in the recipe run history and marks it as a test run so that you can distinguish it from normal runs.
To run a test case, open the recipe, select the test case that you want to run and start the test from the test case interface. When the run finishes, open the run details and examine the steps.
First, check the overall result. Confirm whether the run completed successfully or failed at a specific step. If the run failed, review the error message in that step and note which action or connector produced it.
Next, walk through the data. Inspect the input and output of each step to see how the recipe transformed the data. Look for mismatches between expected values and actual values. For example, confirm that IDs, email addresses and status fields remain correct and consistent throughout the run.
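When you compare a step's output against what you expected, a small helper script can flag mismatched fields faster than eyeballing two payloads. This is a minimal sketch; the payload contents are illustrative, not real run data.

```python
# Compare an expected step output against the actual output copied from a
# test run, and report every field that differs. Values are illustrative.
def diff_fields(expected: dict, actual: dict) -> dict:
    """Return {field: (expected_value, actual_value)} for mismatched fields."""
    mismatches = {}
    for key in expected.keys() | actual.keys():
        if expected.get(key) != actual.get(key):
            mismatches[key] = (expected.get(key), actual.get(key))
    return mismatches

expected = {"user_id": "1042", "email": "jane.doe@example.com", "status": "active"}
actual = {"user_id": "1042", "email": "jane.doe@example.com", "status": "suspended"}

print(diff_fields(expected, actual))  # only the "status" field differs
```

The same check works on any step's input or output that you can copy out of the run details as key-value pairs.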
Then, verify side effects. If the recipe writes to external systems, confirm in those systems that the test created or updated the expected records. Use test or sandbox data where possible so that you do not alter production records while you test.
If the test run does not produce the results you expect, adjust the recipe or the input data, then run the test case again.
Editing and duplicating test cases
You can update a test case when requirements change. For example, you might need to add new fields to the input data or adjust values to match a new integration scenario.
To edit a test case, open the recipe, select the test case, and update its settings. Change input fields, the starting step, or other relevant options, then save the changes. The next time you run the test, Docebo Connect uses the updated configuration.
When you want to test variations on the same scenario, you can create a new test case that uses an existing test case as a reference. Start by creating a new test case, then copy the relevant input from the original test case and adjust only the fields that differ. This approach keeps individual test cases focused and easier to understand.
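The copy-and-override pattern above can be sketched in a few lines: keep one base payload and derive each variation by changing only the fields that matter for that scenario. The field names here are illustrative assumptions.

```python
import copy

# Base test input for a "Create user - valid payload" scenario
# (illustrative fields, not a fixed Docebo schema).
base_case = {
    "user": {"email": "jane.doe@example.com", "status": "active"},
    "send_welcome_email": True,
}

def variant(base: dict, **overrides) -> dict:
    """Deep-copy the base payload and override only the fields that differ."""
    case = copy.deepcopy(base)
    case.update(overrides)
    return case

# Each variation changes one thing, so each test case stays focused.
missing_email = variant(base_case, user={"email": "", "status": "active"})
no_welcome = variant(base_case, send_welcome_email=False)

print(missing_email["user"]["email"])   # ""
print(no_welcome["send_welcome_email"]) # False
print(base_case["send_welcome_email"])  # base case is unchanged: True
```

The deep copy matters: it keeps a variation from mutating the base payload that your other test cases share.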
Using test cases for troubleshooting
Test cases are useful when you investigate issues that originate from production projects. When a recipe fails in production you can use the failing run as a starting point for a new test case.
First, open the failed production run and review the input data and step output. Then, create a new test case that uses the same input values for the step where the error occurred. Run the test case and confirm whether you can reproduce the issue.
Next, simplify the scenario. If the recipe has multiple branches or complex conditions, adjust the test case to isolate the part that fails. You might start the test at a later step and provide only the data that this step needs. This helps you separate problems in trigger handling from problems in specific actions.
Then, make changes and retest. Adjust the recipe logic, connector configuration, or mapping settings in the project. After each change, run the test case again to confirm that the change resolves the issue and does not introduce new problems.
When you identify and fix the root cause you can keep the test case as a regression test. Run it again in the future when you modify the recipe so that you detect regressions early.
Best practices
Use the following practices to get reliable results from test case functionality:
- Use descriptive names for test cases so that other administrators can understand their purpose without opening them
- Group test cases by scenario, such as normal flow, edge case, and error path, rather than collecting many unrelated checks in a single test case
- Keep test data realistic but anonymized when it represents real users or customers
- Prefer sandbox or test environments for external systems to prevent accidental changes to production data
- Review test runs regularly and remove test cases that no longer match your recipes so that the list of available tests stays clear and useful
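To keep test data realistic but anonymized, as recommended above, a small helper can replace identifying values with stable pseudonyms before you store them in a test case. This is a sketch under illustrative assumptions; the field names are not a fixed Docebo schema.

```python
import hashlib

def pseudonym(value: str, prefix: str) -> str:
    """Derive a stable, non-identifying stand-in for a real value."""
    digest = hashlib.sha256(value.encode("utf-8")).hexdigest()[:8]
    return f"{prefix}-{digest}"

def anonymize_user(user: dict) -> dict:
    """Scrub identifying fields; keep non-identifying fields realistic."""
    scrubbed = dict(user)
    scrubbed["email"] = pseudonym(user["email"], "user") + "@example.com"
    scrubbed["first_name"] = pseudonym(user["first_name"], "fn")
    scrubbed["last_name"] = pseudonym(user["last_name"], "ln")
    return scrubbed

real = {"email": "jane.doe@corp.com", "first_name": "Jane",
        "last_name": "Doe", "status": "active"}
safe = anonymize_user(real)

print(safe["status"])  # non-identifying fields are kept as-is: "active"
print(safe["email"])   # same input always yields the same pseudonym
```

Because the pseudonyms are deterministic, related records scrubbed the same way still line up across test cases, which keeps synchronization scenarios testable.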
By designing and maintaining effective test cases you can improve the quality and reliability of your Docebo Connect projects and reduce the risk of integration issues.