Going live with your AI application, especially in a Salesforce org, involves a sequence of crucial steps that bridge the development and production environments.
The journey from sandbox testing to a full-fledged production deployment requires meticulous planning, testing, and iteration.
This comprehensive guide outlines the process, emphasizing generative AI’s trial-and-error nature and incorporating specifics from GPTfy’s documentation on prompt migration and other relevant processes.
Test Your Prompt with Realistic Data in Sandbox
Begin by testing your AI model’s prompts against data that closely mirrors real-world scenarios in quality, diversity, and the environmental factors specific to your production setting.
This step is crucial for identifying discrepancies early and ensuring the AI’s performance aligns with expectations.
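For example, if your prompts run against Case records, a quick anonymous Apex query like the sketch below can pull a varied sample of recent records to test with. This is illustrative only: substitute the object, fields, and filters your prompt actually uses.

```apex
// Illustrative only: pull a varied sample of recent Cases to exercise a prompt
// against different record types, origins, and description lengths.
// Swap in the object, fields, and filters your prompt actually targets.
List<Case> sampleCases = [
    SELECT Id, Subject, Origin, RecordType.Name, Description
    FROM Case
    WHERE CreatedDate = LAST_N_DAYS:90
    ORDER BY CreatedDate DESC
    LIMIT 50
];
System.debug('Sample size: ' + sampleCases.size());
```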
Download your Prompts, Data Context & AI Models from Sandbox
Once your prompts are rigorously tested, use GPTfy’s import-export utility to migrate unit-tested prompts, associated data context mappings, and related AI Models as a JSON file.
This utility facilitates the seamless transfer of AI components from the sandbox to higher environments, such as User Acceptance Testing (UAT) and Production, without the hassle of manual reconfiguration.
Manually export/import related Apex Classes.
Download or copy/paste any custom Apex classes you use to connect to an API data source or to authenticate with your AI Model.
Typical use cases for these classes include data enrichment, custom authentication, and other third-party integrations.
This is uncommon, but if you have such classes in your Sandbox, bring them to your higher environment first.
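For reference, such a class is usually a thin callout wrapper. The sketch below is purely hypothetical: the class name, named credential (Enrichment_API), and endpoint path are placeholders, and your real class should follow whatever contract your GPTfy configuration expects.

```apex
// Hypothetical data-enrichment helper: calls an external REST source through a
// named credential and returns the raw JSON for use in a prompt's data context.
// 'Enrichment_API' and the path are placeholders specific to this sketch.
public with sharing class AccountEnrichmentService {
    public class EnrichmentException extends Exception {}

    public static String fetchCompanyProfile(String domain) {
        HttpRequest req = new HttpRequest();
        req.setEndpoint('callout:Enrichment_API/companies?domain=' +
                        EncodingUtil.urlEncode(domain, 'UTF-8'));
        req.setMethod('GET');
        HttpResponse res = new Http().send(req);
        if (res.getStatusCode() != 200) {
            throw new EnrichmentException('Enrichment call failed: ' + res.getStatus());
        }
        return res.getBody();
    }
}
```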
Import your Prompts, Data Context Mapping & AI Models in Higher Environment (UAT/Prod)
Directly import the prompt, corresponding data context mapping, and related AI Model configuration into your higher environment.
This ensures that all tested configurations are accurately replicated in the production setting, reducing the risk of discrepancies or errors.
After the import, carefully verify the following:
- AI Models point to the correct LLM instances, especially if Sandbox and Production use different ones.
- AI endpoints are set correctly for your Dev and Prod Orgs.
- Prompts / Data Context Mappings are imported correctly, with no invalid objects/fields. Invalid references typically appear when your Sandbox and Production configurations differ.
Note: GPTfy validates all fields when you activate the Prompt, so pay close attention during Prompt Activation. If you receive an error, check for misconfiguration before proceeding.
Validate AI Model connectivity
Ensure that your external credentials/named credentials or any other related authentication mechanisms are working correctly.
You can test them by activating a specific prompt and investigating the security audit to ensure that your AI Model is responding.
Alternatively, you can test these by running some simple Apex code from the Developer Console to ensure your Org can access the REST endpoint.
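As a minimal sketch of that check, run from the Developer Console as anonymous Apex: the named credential My_AI_Model_Credential and the path below are placeholders for whatever your AI Model configuration actually points to.

```apex
// Minimal connectivity check, run as anonymous Apex from the Developer Console.
// 'My_AI_Model_Credential' and the path are placeholders; substitute the named
// credential and endpoint your AI Model configuration actually uses.
HttpRequest req = new HttpRequest();
req.setEndpoint('callout:My_AI_Model_Credential/v1/models');
req.setMethod('GET');
req.setTimeout(20000);
HttpResponse res = new Http().send(req);
System.debug('Status: ' + res.getStatusCode());
System.debug('Body: ' + res.getBody());
```

A 2xx status code confirms the Org can reach the endpoint; authentication or certificate problems will surface here rather than after deployment.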
This step is critical to avoid connectivity issues impacting AI’s functionality post-deployment.
Add GPTfy Lightning Web Component to Page Layouts
Identify the Record Detail Page Layouts from which these Prompts will be accessed and add the GPTfy Lightning Component to them.
For objects with multiple Page Layouts, verify that the Console is added to the correct ones based on Record Type, Profile, and other parameters.
Assign and Test Prompts - 1 at a time
Take a step-by-step approach and test prompts in a controlled manner: one user, one Profile, one prompt at a time.
Here is how you can do this:
- Assign the GPTfy permission set to a test user (a scripted example follows this list).
- Assign a specific prompt to the test user’s profile.
- Activate the prompt and test it against a variety of production data/scenarios.
- Ensure that each prompt works as intended before widespread deployment.
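The permission set assignment in the first step can also be scripted from the Developer Console. Here is a minimal sketch, assuming a permission set whose label starts with “GPTfy” and a placeholder test username; adjust both to your org.

```apex
// Minimal sketch: assign a GPTfy permission set to a test user via anonymous Apex.
// The permission set label filter and the username below are placeholders.
PermissionSet ps = [SELECT Id FROM PermissionSet WHERE Label LIKE 'GPTfy%' LIMIT 1];
User testUser = [SELECT Id FROM User WHERE Username = 'test.user@example.com' LIMIT 1];
insert new PermissionSetAssignment(
    AssigneeId = testUser.Id,
    PermissionSetId = ps.Id
);
```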
Verify Prompt Functionality as a New User
Once you have tested a prompt for a single profile, assign these Prompts to a small set of users.
If appropriate, log in as them to verify that the prompts are accessible and operate correctly with their data.
This step is crucial for identifying unforeseen issues related to customizations or other environmental factors affecting prompt performance.
Caution: Be careful with prompts that may update fields or cause other data changes.
Gradually enable Prompts for Pilot group / User Adoption
Given AI’s new and emerging nature, we recommend that you gradually deploy Prompts and AI functionality.
You can do this by assigning the fully vetted prompts to a select group of end users. Once they are satisfied, you can roll the prompts out to a larger group.
This stage marks the transition of your AI application from a controlled testing environment to active production use.
Monitor and Audit
Keep a close eye on security, audit records, and user feedback, particularly in the early stages of production.
Monitoring these elements closely helps identify and address any issues quickly, ensuring the AI’s integrity and reliability.
GPTfy’s user feedback feature on Prompt responses lets your users report incomplete responses, bias, hallucinations, or other discrepancies.
This type of feedback is invaluable for your Prompt Engineers and Business Analysts to refine your AI solution and maintain user trust.
Apply a Rigorous Production Support Mentality
Adopt a rigorous production support mentality to address deficiencies identified through user feedback or monitoring.
Quick, effective issue resolution is key to sustaining trust and confidence in your AI solution and ensuring its long-term success.
If you have prompts that allow user input, such as custom prompts, address specific questions or comments about them quickly.
In particular, following GPTfy’s prompt versioning and migration guidelines is critical when fixes and changes are made in Production.
Change Management & Empathy
A Salesforce AI rollout differs slightly from a classic technology release, especially because your users and your organization may be experiencing AI for the first time.
Remember that this may be the first time some of your end users are writing Prompts. They may require handholding to feel more comfortable and see the value in your AI investments.
Therefore, it is vital to combine technical rigor with empathy and thoughtfulness.
This will make change management easier and increase trust in your AI deployment.
Summary
These guidelines provide a structured approach to managing prompt iterations and migrations, ensuring that updates are rolled out smoothly without disrupting the user experience.
By adhering to the outlined steps and leveraging GPTfy’s documentation and tools, you can streamline the migration of AI from sandbox to production, mitigate risks, and achieve a smooth, reliable AI deployment in your Salesforce org.