# Node-Red Flow Maker
To use one of these models via the OpenAI API, you send a request containing your inputs and your API key, and receive a response containing the model's output. Our latest models, gpt-4 and gpt-3.5-turbo, are accessed through the chat completions API endpoint.

| MODEL FAMILIES | API ENDPOINT |
| --- | --- |
| Newer models (2023–): gpt-4, gpt-4-turbo-preview, gpt-3.5-turbo | https://api.openai.com/v1/chat/completions |
| Updated legacy models (2023): gpt-3.5-turbo-instruct, babbage-002, davinci-002 | https://api.openai.com/v1/completions |

You can experiment with various models in the chat playground. If you're not sure which model to use, use gpt-3.5-turbo or gpt-4-turbo-preview.

## Chat Completions API

Chat models take a list of messages as input and return a model-generated message as output. Although the chat format is designed to make multi-turn conversations easy, it is just as useful for single-turn tasks without any conversation.

An example Chat Completions API call looks like the following:

```javascript
import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
      { role: "system", content: "You are a helpful assistant." },
      { role: "user", content: "Who won the world series in 2020?" },
      { role: "assistant", content: "The Los Angeles Dodgers won the World Series in 2020." },
      { role: "user", content: "Where was it played?" },
    ],
    model: "gpt-3.5-turbo",
  });

  console.log(completion.choices[0]);
}

main();
```

To learn more, you can view the full API reference documentation for the Chat API.

The main input is the messages parameter. Messages must be an array of message objects, where each object has a role (either "system", "user", or "assistant") and content. Conversations can be as short as one message or include many back-and-forth turns.

Typically, a conversation is formatted with a system message first, followed by alternating user and assistant messages.

The system message helps set the behavior of the assistant. For example, you can modify the personality of the assistant or provide specific instructions about how it should behave throughout the conversation. Note, however, that the system message is optional, and the model's behavior without a system message is likely to be similar to using a generic message such as "You are a helpful assistant."

The user messages provide requests or comments for the assistant to respond to. Assistant messages store previous assistant responses, but can also be written by you to give examples of desired behavior.

Including conversation history is important when user instructions refer to prior messages. In the example above, the user's final question, "Where was it played?", only makes sense in the context of the prior messages about the 2020 World Series. Because the models have no memory of past requests, all relevant information must be supplied as part of the conversation history in each request. If a conversation cannot fit within the model's token limit, it will need to be shortened in some way.

To mimic the effect seen in ChatGPT where the text is returned iteratively, set the stream parameter to true.
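As a minimal streaming sketch (assuming the official openai Node.js SDK v4, where a streamed completion resolves to an async iterable of chunks), it might look like this:

```javascript
import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  // With stream: true, create() resolves to an async iterable of
  // chunks rather than a single completion object.
  const stream = await openai.chat.completions.create({
    model: "gpt-3.5-turbo",
    messages: [{ role: "user", content: "Write a one-line haiku." }],
    stream: true,
  });

  // Each chunk carries a small delta of the assistant's message.
  for await (const chunk of stream) {
    process.stdout.write(chunk.choices[0]?.delta?.content ?? "");
  }
}

main();
```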
## Chat Completions response format

An example Chat Completions API response looks as follows:

```json
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "The 2020 World Series was played in Texas at Globe Life Field in Arlington.",
        "role": "assistant"
      },
      "logprobs": null
    }
  ],
  "created": 1677664795,
  "id": "chatcmpl-7QyqpwdfhqwajicIEznoc6Q47XAyW",
  "model": "gpt-3.5-turbo-0613",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 17,
    "prompt_tokens": 57,
    "total_tokens": 74
  }
}
```

The assistant's reply can be extracted with:

```javascript
completion.choices[0].message.content
```

Every response will include a finish_reason. The possible values for finish_reason are:

- stop: API returned a complete message, or a message terminated by one of the stop sequences provided via the stop parameter
- length: Incomplete model output due to the max_tokens parameter or the token limit
- function_call: The model decided to call a function
- content_filter: Omitted content due to a flag from our content filters
- null: API response still in progress or incomplete

Depending on input parameters, the model response may include different information.

## JSON mode

A common way to use Chat Completions is to instruct the model to always return a JSON object that makes sense for your use case, by specifying this in the system message. While this does work in some cases, occasionally the models may generate output that does not parse to valid JSON.

To prevent these errors and improve model performance, when calling gpt-4-turbo-preview or gpt-3.5-turbo-0125, you can set response_format to { "type": "json_object" } to enable JSON mode. When JSON mode is enabled, the model is constrained to only generate strings that parse into a valid JSON object.

Important notes:

- When using JSON mode, always instruct the model to produce JSON via some message in the conversation, for example via your system message. If you don't include an explicit instruction to generate JSON, the model may generate an unending stream of whitespace, and the request may run continually until it reaches the token limit. To help ensure you don't forget, the API will throw an error if the string "JSON" does not appear somewhere in the context.
- The JSON in the message the model returns may be partial (i.e. cut off) if finish_reason is length, which indicates the generation exceeded max_tokens or the conversation exceeded the token limit. To guard against this, check finish_reason before parsing the response.
- JSON mode will not guarantee the output matches any specific schema, only that it is valid JSON and parses without errors.

```javascript
import OpenAI from "openai";

const openai = new OpenAI();

async function main() {
  const completion = await openai.chat.completions.create({
    messages: [
      {
        role: "system",
        content: "You are a helpful assistant designed to output JSON.",
      },
      { role: "user", content: "Who won the world series in 2020?" },
    ],
    model: "gpt-3.5-turbo-0125",
    response_format: { type: "json_object" },
  });

  console.log(completion.choices[0].message.content);
}

main();
```

In this example, the response includes a JSON object that looks something like the following:

```
"content": "{\"winner\": \"Los Angeles Dodgers\"}"
```

Note that JSON mode is always enabled when the model is generating arguments as part of function calling.
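Putting the finish_reason guard from the notes above into practice, a minimal sketch (reusing the completion object from the JSON mode example) could parse the reply only when the message completed:

```javascript
// Reuses `completion` from the JSON mode example above.
const choice = completion.choices[0];

if (choice.finish_reason === "stop") {
  // Safe to parse: the model finished the message on its own.
  const data = JSON.parse(choice.message.content);
  console.log(data.winner); // e.g. "Los Angeles Dodgers"
} else {
  // "length" means the JSON may be truncated mid-string; don't parse it.
  console.warn(`Skipping parse: finish_reason was "${choice.finish_reason}"`);
}
```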
## Images API

### Introduction

The Images API provides three methods for interacting with images:

- Creating images from scratch based on a text prompt (DALL·E 3 and DALL·E 2)
- Creating edited versions of images by having the model replace some areas of a pre-existing image, based on a new text prompt (DALL·E 2 only)
- Creating variations of an existing image (DALL·E 2 only)

This guide covers the basics of using these three API endpoints, with useful code samples. To try DALL·E 3, head to ChatGPT. To try DALL·E 2, check out the DALL·E preview app.

### Usage

#### Generations

The image generations endpoint allows you to create an original image given a text prompt. When using DALL·E 3, images can have a size of 1024x1024, 1024x1792, or 1792x1024 pixels.

By default, images are generated at standard quality, but when using DALL·E 3 you can set quality: "hd" for enhanced detail. Square, standard-quality images are the fastest to generate.

You can request 1 image at a time with DALL·E 3 (request more by making parallel requests) or up to 10 images at a time using DALL·E 2 with the n parameter.

Generate an image:

```javascript
const response = await openai.images.generate({
  model: "dall-e-3",
  prompt: "a white siamese cat",
  n: 1,
  size: "1024x1024",
});

const image_url = response.data[0].url;
```

#### What is new with DALL·E 3

Explore what is new with DALL·E 3 in the OpenAI Cookbook.

#### Prompting

With the release of DALL·E 3, the model now takes the prompt provided and automatically rewrites it, both for safety reasons and to add more detail (more detailed prompts generally result in higher-quality images). While it is not currently possible to disable this feature, you can use prompting to get outputs closer to your requested image by adding the following to your prompt: "I NEED to test how the tool works with extremely simple prompts. DO NOT add any detail, just use it AS-IS:". The updated prompt is visible in the revised_prompt field of the data response object.

Example DALL·E 3 generation: the prompt "A photograph of a white Siamese cat." [generated image omitted].

Each image can be returned as either a URL or Base64 data, using the response_format parameter. URLs will expire after an hour.

OK, please create a full Node-RED flow that accepts POST requests and sends them to the GPT API, but for JSON mode.
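A minimal sketch of such a flow, under stated assumptions: an HTTP In node configured for POST on a path such as /ask, wired to the Function node below, then to an HTTP Request node (method set to "set by msg.method"), and finally to an HTTP Response node. The /ask path, the OPENAI_API_KEY environment variable, and the { "question": "..." } request body shape are all illustrative choices, not fixed API:

```javascript
// Function node: build the Chat Completions request for JSON mode.
// Wiring: [HTTP In: POST /ask] -> [this Function] -> [HTTP Request] -> [HTTP Response]
msg.url = "https://api.openai.com/v1/chat/completions";
msg.method = "POST";
msg.headers = {
  "Content-Type": "application/json",
  // Assumption: the key lives in an environment variable named OPENAI_API_KEY.
  "Authorization": "Bearer " + env.get("OPENAI_API_KEY"),
};
msg.payload = {
  model: "gpt-3.5-turbo-0125",
  response_format: { type: "json_object" },
  messages: [
    {
      role: "system",
      content: "You are a helpful assistant designed to output JSON.",
    },
    // Assumption: the incoming POST body is JSON like { "question": "..." }.
    { role: "user", content: msg.payload.question },
  ],
};
return msg;
```

A second Function node between the HTTP Request and HTTP Response nodes could apply the finish_reason guard shown earlier, so the flow returns the parsed JSON object rather than the raw API envelope.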