GPTfy - Salesforce Native AI Platform

Create Prompts. Publish Safely. Track ROI.

GPTfy's declarative prompt lifecycle lets Salesforce admins build, test, mask sensitive data, publish by profile, and measure AI value — entirely without code.

For Salesforce admins and AI program leads, this demo walks through GPTfy's complete prompt lifecycle: identifying the right use case, configuring multi-object data context with field-level PII masking, iterative testing, publishing to profiles via the prompt catalog, and tracking time saved and dollar ROI across your org.

Lifecycle stages covered

Prompt Creation and Data Context

  • Define prompts in plain English with instructions on format, language, tone, and any compliance considerations.
  • Select Salesforce objects and fields at up to three levels — parent object, related objects, and grandchild records like case comments.

PII Masking at the Field Level

  • Flag individual fields for masking using specific terms, regex patterns, or a global block list of prohibited values.
  • Masked tokens are re-identified after the AI response is received, before results appear to the user.
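
The mask-then-re-identify flow described above can be sketched conceptually. The following is an illustrative Python sketch of the general technique only, not GPTfy's actual implementation; the regex rule, token format, and function names are assumptions for demonstration:

```python
import re

# Illustrative sketch of field-level masking with re-identification.
# NOT GPTfy's implementation; pattern, token format, and names are assumed.

SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # example pattern rule

def mask(text, token_map):
    """Replace each pattern match with a stable anonymized token."""
    def _swap(match):
        value = match.group(0)
        if value not in token_map:
            token_map[value] = f"<TOKEN_{len(token_map)}>"
        return token_map[value]
    return SSN_PATTERN.sub(_swap, text)

def reidentify(text, token_map):
    """Restore original values in the AI response before display."""
    for value, token in token_map.items():
        text = text.replace(token, value)
    return text

tokens = {}
outbound = mask("Customer SSN is 123-45-6789.", tokens)
# outbound == "Customer SSN is <TOKEN_0>." -- the raw value never leaves
response = reidentify("Summary for <TOKEN_0>: account in good standing.", tokens)
# response == "Summary for 123-45-6789: account in good standing."
```

The key property is that the token map stays inside the org, so the external AI provider only ever sees anonymized placeholders.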

Prompt Catalog and Profile Publishing

  • Published prompts appear in the GPTfy prompt catalog under the business function you assign (Boost Sales, Improve Service, Renewals, etc.).
  • Assign prompts to specific user profiles and optionally restrict them to particular record types on any Salesforce object.

ROI and Usage Analytics

  • The ROI dashboard shows time saved and estimated dollar savings broken down by prompt, user, object, and profile.
  • The AI Insights dashboard tracks usage patterns, identifies power users, and reveals which prompts drive the most adoption.

Use this video when

A Salesforce admin wants to create an account 360 summary prompt pulling data from accounts, opportunities, and cases without writing Apex

An AI program lead needs to ensure that prompts don't send customer names or financial identifiers to an external AI provider

A sales enablement team wants to publish a deal coaching prompt only to account executive profiles on B2B record types

A service manager needs to give agents a case summarization prompt that surfaces related case comments and emails in one click

A VP of Sales Operations needs to show leadership how much time AI is saving per user, prompt, and department each quarter

An IT admin wants to trace, for compliance review, exactly what data was sent to AI and what was masked in a specific AI interaction

Frequently asked questions

What does the prompt lifecycle in GPTfy look like?

The prompt lifecycle in GPTfy starts with identifying a business use case and expected outcome, then creating a plain-English prompt instruction. You associate the prompt with a data context — the Salesforce objects and fields that should be sent to AI — apply PII masking rules, test and iterate, and finally publish the prompt to specific user profiles and record types via the prompt catalog. Usage and ROI are tracked continuously after publishing.

How do admins control which Salesforce data is sent to AI?

In the prompt's data context configuration, admins select the target Salesforce object and then choose individual fields to send. They can also add related objects at multiple levels — for example, an account prompt can include related opportunities, related cases, and even case comments at a third level. Only the fields explicitly selected in the mapping are sent to AI, giving admins precise control over data exposure.
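
Assuming a JSON mapping type, a three-level account data context like the one described above might produce a payload shaped roughly as follows. The field names and structure here are hypothetical; GPTfy's actual payload format may differ:

```python
# Hypothetical payload shape for a three-level account data context.
# Field names and nesting are illustrative, not GPTfy's real JSON format.
account_payload = {
    "Account": {
        "Name": "<TOKEN_0>",            # masked per a field-level rule
        "Industry": "Manufacturing",
        "AnnualRevenue": 5_000_000,
        "Opportunities": [              # level 2: related object
            {"Name": "Renewal FY26", "Amount": 120_000, "StageName": "Negotiation"},
        ],
        "Cases": [                      # level 2: related object
            {
                "Subject": "Billing question",
                "CaseComments": [       # level 3: grandchild records
                    {"Body": "Customer asked about invoice <TOKEN_1>."},
                ],
            },
        ],
    },
}
# Only fields explicitly selected in the mapping appear in the payload.
```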

What PII masking options does GPTfy support?

For each field in the data context, admins can specify a masking rule: term-based masking for known sensitive terms, pattern matching using regular expressions for structured formats like SSNs or account numbers, or a block list for terms that must never reach AI under any circumstances. GPTfy applies these rules before the prompt payload leaves Salesforce, and re-identifies masked tokens after the AI response is received.

Where do published prompts appear for end users?

After a prompt is tested and activated, it appears in the GPTfy prompt catalog under the business function category you assigned — such as Boost Sales or Improve Service. Admins then assign the prompt to one or more user profiles and optionally restrict it to specific record types. Users with the assigned profile will see the prompt available in their catalog when working on the appropriate object.

Can GPTfy write AI responses back to Salesforce fields?

Yes. Prompts can be configured to write AI responses back to specific Salesforce fields — for example, updating a case sentiment field, an account summary field, or an opportunity next step field. Because GPTfy runs inside Salesforce's security model, these field updates respect object-level and field-level permissions for the running user.
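
The permission check idea can be sketched minimally. Here the `field_permissions` map is a stand-in for Salesforce's real field-level security, and none of these names are GPTfy's actual API — it is a conceptual illustration only:

```python
# Conceptual sketch of a permission-aware field write-back.
# field_permissions stands in for Salesforce field-level security;
# all names here are hypothetical, not GPTfy's API.
field_permissions = {
    "Case.AI_Sentiment__c": True,   # running user may edit this field
    "Case.Margin__c": False,        # blocked by field-level security
}

def write_back(record, field, value):
    """Apply an AI response to a record field only if the user may edit it."""
    if not field_permissions.get(field, False):
        raise PermissionError(f"Field-level security blocks writes to {field}")
    record[field] = value
    return record

case = {"Id": "500xx0000000001"}
write_back(case, "Case.AI_Sentiment__c", "Positive")
# the case record now carries the AI-generated sentiment;
# attempting the same write to Case.Margin__c would raise PermissionError
```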

How does GPTfy measure ROI?

GPTfy's ROI dashboard and AI Insights dashboard track time saved per AI call and aggregate savings across users, prompts, objects, and profiles. Admins can see which prompts generate the most time savings, which users and departments are active, and usage trends over time. This data translates time saved into dollar savings based on loaded labor rates, giving leadership a concrete measure of AI program value.
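
As a back-of-the-envelope illustration of how time saved converts to dollars under a loaded labor rate, using assumed example numbers rather than values from any real dashboard:

```python
# Back-of-the-envelope ROI arithmetic. The volumes and rate below are
# assumed example numbers, not defaults from the ROI dashboard.
seconds_saved_per_call = 420      # ~7 minutes saved per case summary
calls_per_month = 1_200           # org-wide runs of the prompt
loaded_hourly_rate = 75.00        # fully loaded labor cost, USD/hour

hours_saved = seconds_saved_per_call * calls_per_month / 3600
monthly_savings = hours_saved * loaded_hourly_rate
# hours_saved == 140.0, monthly_savings == 10500.0
```

Swapping in your own per-call savings and labor rates gives the same kind of dollar figure the dashboard reports per prompt, user, and profile.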

Ready to see this in your Salesforce org?

Book a 45-minute session and we'll walk through this use case using your own data.

Video transcript
Let's take a look at the Prompt Lifecycle inside your Salesforce with GPTfy. It all begins with first understanding: what's the purpose? Why do I need a prompt? What is a good business use case, and what's the expected outcome? A simple way to think about this is to look at unstructured information. If you have long text fields, if you have emails, if you have case comments, if you have meeting notes on tasks and events, a lot of that kind of information lends itself really well to generative AI. You may have other cases such as drafting emails, case summarization, account 360, or updating fields based on a response from AI. Whatever the use case is, you want to identify that and define what's the expected outcome you're looking for.

Once you've done that, you go ahead and create a prompt. A prompt is just a plain English way of telling AI what you want it to do. It's as simple as that. You also specify what format you're expecting from AI — you may want a table, you may want a bullet point summary, you may want it in paragraphs — and you also specify the grounding: what tone do you want, what language do you want, and any additional compliance, ethics, privacy, or security considerations you want AI to be aware of.

The next part is to associate the data context. What information do you want to send to AI? Think of it as the objects and the fields on them. Let's say if I was doing an account 360, I want to send certain fields from the account record itself — name, description, amount, revenue, industry. Then I may also want to send fields from related objects, like related opportunities and related cases, and various fields on those records. I may also want to send third-level information — so for cases, I may want to send case comments as well. GPTfy lets you do that. All of that is specified here. In addition, if there are fields that contain sensitive data, you can enforce security here. You can tell GPTfy to mask information and not send it to AI.
GPTfy can do that based on specific terms, based on pattern matching such as regular expressions, and it can also do it using a block list. If you have specific terms that you never ever want GPTfy to send to AI, you have the capability to specify all of that here.

Once you've done this, you want to test and optimize it. You run this prompt, you see the results. There is a good chance that you may have to tweak it — add or remove fields, make changes to your prompt — to get the response that's just right. There is definitely a bit of trial and error here, so you want to iterate as often as you need to get the result that your users expect.

Once you're done with this, you can publish your prompt to specific user profiles. You can also associate it to specific record types on an object. Maybe you have B2B and B2C customers and your prompt is only intended to run for B2C customers — so you have the opportunity to do that. GPTfy also has a business catalog where your prompt will show up grouped under a specific business functionality — it may be Boost Sales, it may be Improve Service Experience, or it may be grouped under Renewals.

And finally, you can track the return on investment and usage. GPTfy monitors the time saved on every call, and by using this you can show your organization or your customers how they are saving time and costs in pure dollar terms by using AI through GPTfy. It also shows usage behavior — your most prolific power users, the most popular prompts, and which objects prompts are running against. The whole point is that in this lifecycle you have the opportunity to not only harness the power of AI, but also apply it thoughtfully at every step of the way, so that your organization can get the value we all keep hearing about with generative AI. The good news is this doesn't have to end here. You keep tracking, keep tweaking, keep modifying, and you continue to find new requirements.
You identify gaps in what exists and you can go back and create new prompts and optimize them and drive more innovation in your organization. Everything you're looking at here is completely declarative and really easy to do. Now let me show you how to create a new prompt and then how to find it in the prompt catalog so you can assign it to whatever profiles and record types you may have.

First, let's go into prompts, create new, and call this Demo Account Summary. Our target object is account, create mapping, the type is going to be JSON, and we want this to appear in Boost Sales. Click save. Now we want to add a couple of fields to the mapping. We'll add the account description and specify masking — this could be your regex, your block list, whatever is appropriate. We'll add the account name, the account type, and the annual revenue. Save that. Then we go to our prompt components and click new. We'll call it Summary. It's on the account. The action is a summary. Let's say we want it under 150 words, in English, and in a friendly tone. Click save. Now we have to activate. Any time you make a change, you have to activate.

Now that it's active, let's go to an account and select the prompt we just created — Demo Account Summary — and then run. We get a short summary back. Notice the masking in action: values that were flagged are replaced with anonymized tokens before the payload reaches AI, and you can see this clearly in the security audit record. This is the original data, this is what GPTfy masked before sending, and this is what came back from AI. But let's say that's not a rich enough summary. Let's go ahead and add more context by adding a related object. Let's add case, and also add case comments at the next level. Do field mapping for cases — account ID, case ID, case reason, description with specific masking patterns — and for case comments, add the body field and apply masking so we don't send anything to AI that AI doesn't need to have.
Save that and activate again. Go back to the account, run this again, and now it comes back with anonymized account data plus information about the associated case. You can add more — opportunities, tasks, whatever enriches the use case. That's prompt creation and iterative refinement.

Now let's go back to the cockpit. Select prompt catalog, then Boost Sales — which is what we set as the purpose. Scroll down until we see Demo Account Summary. That's the one we just created. Check it, click next. Assign it to the AE profile. If you had record types, you could associate those too. If you wanted more profiles, you could absolutely add them. Finish. Now when an account executive logs in with their profile, they'll find this prompt in the catalog under Boost Sales.

From the cockpit, you can also see the ROI dashboard, which gives you statistics on how much time and how much money users are saving — broken out by prompt and by user. And the AI Insights dashboard shows usage by object, usage by profile, and how many seconds have been saved across your org. Once your account executives are actively using this, you'll see their savings grow and be able to demonstrate tangible AI program value.

Last updated: February 2026