# Send text prompt action
This action generates completions for a given prompt using OpenAI's language models. Provide a prompt and the desired parameters, and the action returns one or more predicted completions. Use this action to autocomplete text, answer questions, and generate new content.
*Screenshot: Send Text Prompt Action*
# Input
Field | Description |
---|---|
Model | Select the OpenAI model to which you plan to send the text prompt. |
Prompt | The prompt to generate completions for. If a prompt is not specified, the model generates as if from the beginning of a new document. To generate completions for multiple strings (or token arrays), provide the relevant information as a datapill. See OpenAI's API documentation for the accepted formats. |
Maximum Tokens | The maximum number of tokens to generate in the completion. The token count of your prompt plus the value here cannot exceed the model's context length. Most models have a context length of 2048 tokens (except for the newest models, which support 4096). |
Suffix | The suffix that comes after the completion of inserted text. |
Top p | Enter a value between 0 and 1 to control the diversity of completions. A higher value results in more varied responses. We recommend using this or Temperature, but not both. |
Temperature | Enter a value between 0 and 2 to control the randomness of completions. Higher values make the output more random, while lower values make it more focused and deterministic. We recommend using this or Top p, but not both. |
Number of completions | The number of completions to generate for the prompt. |
Log probabilities | Enter a number to return the log probabilities of that many most likely tokens at each position, along with the chosen token. |
Stop phrase | A specific stop phrase that will end generation. For example, if you set the stop phrase to a period (.) the model will generate text until it reaches a period, and then it will stop. Use this to control the amount of text generated. |
Presence penalty | A number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
Frequency penalty | A number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood of repeating the same line verbatim. |
Best of | Controls how many completions are generated server-side before the best ones are returned. This value must be greater than or equal to Number of completions. |
Logit bias | Input a JSON object mapping token IDs to a change in logit for those tokens. For example, you can pass {"50256": -100} to prevent the <\|endoftext\|> token from being generated. |
User | A unique identifier representing your end-user, which can help OpenAI to monitor and detect abuse. |
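Assembled as a request body, the fields above map onto the parameters of OpenAI's completions API. The sketch below shows how a minimal payload might look; the field values themselves are hypothetical examples, not recommendations.

```python
import json

# Hypothetical payload for OpenAI's /v1/completions endpoint.
# Keys correspond to the input fields described in the table above.
payload = {
    "model": "text-davinci-003",            # Model
    "prompt": "Write a haiku about rain",   # Prompt
    "max_tokens": 64,                       # Maximum Tokens
    "temperature": 0.7,                     # Temperature (Top p omitted: use one or the other)
    "n": 1,                                 # Number of completions
    "stop": ".",                            # Stop phrase
    "presence_penalty": 0.0,                # Presence penalty
    "frequency_penalty": 0.0,               # Frequency penalty
    "logit_bias": {"50256": -100},          # Logit bias: suppress <|endoftext|>
    "user": "end-user-1234",                # User identifier for abuse monitoring
}

print(json.dumps(payload, indent=2))
```

Note that `best_of` is omitted here because `n` already equals 1; when you do set it, keep `best_of >= n` as described above.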
# Output
Last updated: 6/20/2023, 4:11:40 PM