OpenAI DevDay: GPTs and Custom Assistants


On November 6, 2023, OpenAI held its first DevDay. The announcements reshaped what developers could build. Here’s what dropped.

GPT-4 Turbo

Bigger Context

GPT-4:       8K / 32K tokens
GPT-4 Turbo: 128K tokens

That’s roughly 300 pages of text in context.
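A quick back-of-envelope check on that figure, assuming ~0.75 words per token and ~300 words per page (both rough rules of thumb, not OpenAI's numbers):

```python
# Rough conversion from context-window tokens to printed pages.
# Assumes ~0.75 words per token and ~300 words per page -- both
# rules of thumb, not official figures.
WORDS_PER_TOKEN = 0.75
WORDS_PER_PAGE = 300

def tokens_to_pages(tokens: int) -> int:
    return round(tokens * WORDS_PER_TOKEN / WORDS_PER_PAGE)

print(tokens_to_pages(128_000))  # 320 -- in the ballpark of "300 pages"
print(tokens_to_pages(32_000))   # 80  -- the old GPT-4 32K window
```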

Lower Cost

Model        Input       Output
GPT-4        $0.03/1K    $0.06/1K
GPT-4 Turbo  $0.01/1K    $0.03/1K

3x cheaper for input, 2x cheaper for output.
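To make that concrete, here is a sketch of what one call with 10K input tokens and 1K output tokens costs under each model, using the per-1K prices from the table above:

```python
# Per-1K-token prices from the table above: (input, output) in dollars.
PRICES = {
    "gpt-4": (0.03, 0.06),
    "gpt-4-turbo": (0.01, 0.03),
}

def call_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    inp, out = PRICES[model]
    return round(input_tokens / 1000 * inp + output_tokens / 1000 * out, 4)

print(call_cost("gpt-4", 10_000, 1_000))        # 0.36
print(call_cost("gpt-4-turbo", 10_000, 1_000))  # 0.13
```

Nearly a 3x drop for a typical long-input, short-output call.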

Updated Knowledge

Training cutoff moved to April 2023 (from September 2021).

Assistants API

A new abstraction for building AI assistants:

from openai import OpenAI

client = OpenAI()

# Create an assistant
assistant = client.beta.assistants.create(
    name="Data Analyst",
    instructions="You help analyze data and create visualizations.",
    tools=[{"type": "code_interpreter"}, {"type": "retrieval"}],
    model="gpt-4-turbo-preview"
)

# Create a thread
thread = client.beta.threads.create()

# Add a message
message = client.beta.threads.messages.create(
    thread_id=thread.id,
    role="user",
    content="Analyze the sales data I uploaded"
)

# Run the assistant
run = client.beta.threads.runs.create(
    thread_id=thread.id,
    assistant_id=assistant.id
)

# Poll until the run finishes, then read the newest message
import time
while run.status in ("queued", "in_progress"):
    time.sleep(1)
    run = client.beta.threads.runs.retrieve(
        thread_id=thread.id,
        run_id=run.id
    )

messages = client.beta.threads.messages.list(thread_id=thread.id)
print(messages.data[0].content[0].text.value)

Built-in Tools

Tool              What It Does
Code Interpreter  Runs Python in a sandbox
Retrieval         Searches uploaded documents
Function Calling  Calls your external functions

Thread Management

# Conversations persist across calls
# Upload files once, use across messages
# No manual context management needed

GPTs (Custom ChatGPT)

Build custom ChatGPT versions without code:

GPT Builder:
1. Describe what you want
2. Upload knowledge files
3. Configure capabilities
4. Share publicly or privately

Example GPTs:

Actions (API Integration)

{
  "openapi": "3.0.0",
  "info": {
    "title": "Weather API",
    "version": "1.0.0"
  },
  "servers": [
    {"url": "https://api.weather.example.com"}
  ],
  "paths": {
    "/current": {
      "get": {
        "operationId": "getCurrentWeather",
        "parameters": [
          {"name": "location", "in": "query", "required": true}
        ]
      }
    }
  }
}

GPTs can call your APIs.
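On the other side of that schema sits your server. A minimal sketch of what the `getCurrentWeather` backend might look like (the handler name and response fields are my own illustration, not part of the announcement):

```python
# Hypothetical backend for the getCurrentWeather operation in the
# schema above. A real service would query a weather provider; this
# returns canned data just to show the request/response shape.
def get_current_weather(location: str) -> dict:
    canned = {
        "paris": {"temp_c": 18, "conditions": "cloudy"},
        "tokyo": {"temp_c": 24, "conditions": "clear"},
    }
    report = canned.get(location.lower())
    if report is None:
        return {"error": f"no data for {location}"}
    return {"location": location, **report}

print(get_current_weather("Paris"))
```

When a user asks the GPT about weather, ChatGPT issues a GET to `/current?location=...` and feeds the JSON response back into the conversation.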

Function Calling Improvements

Multiple Functions at Once

import json

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    messages=[{"role": "user", "content": "Book a flight and hotel to Paris"}],
    tools=[
        # book_flight_schema and book_hotel_schema are JSON Schema
        # function definitions, assumed defined elsewhere
        {"type": "function", "function": book_flight_schema},
        {"type": "function", "function": book_hotel_schema}
    ],
    tool_choice="auto"  # lets the model pick one, several, or none
)

# The response may include multiple parallel tool calls
for tool_call in response.choices[0].message.tool_calls or []:
    if tool_call.function.name == "book_flight":
        flight = book_flight(**json.loads(tool_call.function.arguments))
    elif tool_call.function.name == "book_hotel":
        hotel = book_hotel(**json.loads(tool_call.function.arguments))
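The schemas referenced above follow the standard function-calling format. A sketch of what `book_flight_schema` might contain (field names are illustrative, not from the docs):

```python
# Illustrative tool definition for the book_flight function above.
# "parameters" is standard JSON Schema describing the arguments the
# model should produce.
book_flight_schema = {
    "name": "book_flight",
    "description": "Book a flight to a destination city",
    "parameters": {
        "type": "object",
        "properties": {
            "destination": {"type": "string"},
            "departure_date": {"type": "string", "format": "date"},
        },
        "required": ["destination"],
    },
}
```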

JSON Mode

Force valid JSON output:

import json

response = client.chat.completions.create(
    model="gpt-4-turbo-preview",
    # The prompt itself must mention JSON, or the API rejects the request
    messages=[{"role": "user", "content": "List 3 colors as a JSON object"}],
    response_format={"type": "json_object"}
)

# Output is guaranteed to be syntactically valid JSON
# (though not guaranteed to match any particular schema)
data = json.loads(response.choices[0].message.content)

Vision

GPT-4 Turbo can also take images as input:

response = client.chat.completions.create(
    model="gpt-4-vision-preview",  # the vision-enabled variant at launch
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "What's in this image?"},
                {
                    "type": "image_url",
                    "image_url": {
                        "url": "data:image/jpeg;base64,..."
                    }
                }
            ]
        }
    ]
)

Use Cases

Text-to-Speech

response = client.audio.speech.create(
    model="tts-1",
    voice="alloy",  # alloy, echo, fable, onyx, nova, shimmer
    input="Hello, this is generated speech."
)

response.stream_to_file("output.mp3")

Voices Available

Voice    Character
alloy    Neutral
echo     Male
fable    British accent
onyx     Deep male
nova     Female
shimmer  Expressive

DALL-E 3 API

response = client.images.generate(
    model="dall-e-3",
    prompt="A serene mountain lake at sunset, photorealistic",
    size="1024x1024",
    quality="hd",
    n=1
)

image_url = response.data[0].url

Better prompt following, higher quality than DALL-E 2.

What This Enables

Before DevDay

Build AI assistant:
1. Manage conversation history
2. Implement RAG from scratch
3. Handle file uploads
4. Build code execution sandbox
5. Manage context windows

After DevDay

Build AI assistant:
1. Create Assistant with tools
2. Done

Implications

For Developers

For Products

Competition

Concerns

Vendor Lock-in

Your app → OpenAI Assistants API → ???

If OpenAI changes terms, pricing, or capabilities,
you're dependent.
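One common way to soften this: route all model calls through a thin interface of your own, so the provider can be swapped behind it. A minimal sketch (class and method names are my own, not from any library):

```python
from typing import Protocol

# Thin abstraction over "a thing that completes prompts", so the
# provider behind it can be swapped without touching callers.
class ChatBackend(Protocol):
    def complete(self, prompt: str) -> str: ...

class OpenAIBackend:
    def complete(self, prompt: str) -> str:
        # would call client.chat.completions.create(...) here
        raise NotImplementedError

class EchoBackend:
    """Stand-in backend for tests and local development."""
    def complete(self, prompt: str) -> str:
        return f"echo: {prompt}"

def answer(backend: ChatBackend, prompt: str) -> str:
    return backend.complete(prompt)

print(answer(EchoBackend(), "hi"))  # echo: hi
```

It doesn't remove the dependency, but it keeps the blast radius of a provider change to one module.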

Complexity Hidden

"Just use the API" hides:
- Rate limits
- Cost management
- Edge cases
- Reliability
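Rate limits in particular still land on you. A minimal retry-with-backoff sketch for that problem (the exception type and limits here are illustrative; a real client would catch the SDK's rate-limit error):

```python
import random
import time

# Retry a flaky call with exponential backoff and jitter.
# RuntimeError stands in for a provider rate-limit exception.
def with_retries(fn, max_attempts=5, base_delay=1.0):
    for attempt in range(max_attempts):
        try:
            return fn()
        except RuntimeError:
            if attempt == max_attempts - 1:
                raise  # out of attempts; let the caller handle it
            # wait longer each attempt, with jitter to avoid thundering herd
            time.sleep(base_delay * 2 ** attempt * random.random())
```

"Just use the API" still means writing this kind of plumbing yourself.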

My Take

DevDay was about reducing friction. Building with LLMs got easier.

But the fundamentals haven’t changed—you still need good prompts, solid architecture, and understanding of limitations.

The tools are better. The hard problems remain.


November 2023: AI development got easier again.
