Quickstart (Python + OpenAI)
Recommended: auto-log OpenAI calls with a copy/paste snippet.
1) Install
pip install sigmoda

pip install sigmoda installs openai automatically (unless you install with --no-deps).
2) Set keys
Create a project in the dashboard and copy your SIGMODA_PROJECT_KEY. Set your OpenAI key as OPENAI_API_KEY.
export SIGMODA_PROJECT_KEY="…"
export OPENAI_API_KEY="…"
# optional for local testing so you actually see prompt/response text:
export SIGMODA_ENV="dev"
Why is prompt/response empty?
SIGMODA_ENV=prod (default) means content capture is off unless you enable it. For local testing, set SIGMODA_ENV=dev.
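If exporting shell variables isn't convenient (for example, in a notebook), one option is to set the same variables in-process before initializing the SDK. This is plain Python (os.environ), not a Sigmoda API, and the key values below are placeholders:

import os

# Placeholder values for local testing only; sigmoda.init() reads these from the environment.
os.environ["SIGMODA_PROJECT_KEY"] = "your-project-key"
os.environ["OPENAI_API_KEY"] = "your-openai-key"
os.environ["SIGMODA_ENV"] = "dev"  # capture prompt/response text locally

import sigmoda
sigmoda.init()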
3) Copy/paste (recommended)
Use Sigmoda's OpenAI wrapper with the Responses API to auto-log latency, tokens, and metadata—no manual timestamp/response plumbing required.
import sigmoda

sigmoda.init()  # reads env vars

resp = sigmoda.openai.responses.create(
    model="gpt-5.2",  # or "gpt-5-mini"
    input="Explain vector DBs like I'm 12",
    sigmoda_metadata={"route": "quickstart"},
)

sigmoda.flush(timeout=2.0)  # ensures it shows up for short scripts
print(resp.output_text)

Optional: Chat Completions (still supported)
If you're still using the Chat Completions API, Sigmoda will log those too. We recommend Responses for new projects.
import sigmoda

sigmoda.init()  # reads env vars

resp = sigmoda.openai.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain vector DBs like I'm 12"}],
    sigmoda_metadata={"route": "quickstart"},
)

sigmoda.flush(timeout=2.0)
print(resp.choices[0].message.content)

4) See it in Sigmoda
Open your project → Events and you should see a new event within a few seconds.
Advanced: Log any provider (manual events)
Use sigmoda.log_event(...) with any provider. If you want prompt/response stored, enable capture_content=True (see Privacy).
import sigmoda

sigmoda.init()  # reads env vars

sigmoda.log_event(
    provider="openai",
    model="gpt-4o-mini",
    type="chat_completion",
    prompt="Explain vector DBs like I'm 12",
    response="A vector database stores...",
    duration_ms=1840.2,
    status="ok",
    metadata={"route": "support.reply"},
    # timestamp defaults to now (UTC)
)

sigmoda.flush(timeout=2.0)
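When logging manually, you measure the duration yourself. A minimal sketch of that plumbing with the official openai client follows; the timing pattern is illustrative, and only log_event and flush come from Sigmoda:

import time

import sigmoda
from openai import OpenAI

sigmoda.init()
client = OpenAI()  # uses OPENAI_API_KEY

start = time.perf_counter()
completion = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[{"role": "user", "content": "Explain vector DBs like I'm 12"}],
)
duration_ms = (time.perf_counter() - start) * 1000  # wall-clock latency of the provider call

sigmoda.log_event(
    provider="openai",
    model="gpt-4o-mini",
    type="chat_completion",
    prompt="Explain vector DBs like I'm 12",
    response=completion.choices[0].message.content,
    duration_ms=duration_ms,
    status="ok",
    metadata={"route": "support.reply"},
)
sigmoda.flush(timeout=2.0)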
Privacy defaults
In prod, content capture is off by default. To store text, set capture_content=True (and consider redact=...).
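A sketch of opting in, assuming capture_content is passed per event to log_event as the Advanced section implies (redact options are covered under Privacy and not shown here):

import sigmoda

sigmoda.init()

# Explicitly opt in to storing prompt/response text for this event in prod.
sigmoda.log_event(
    provider="openai",
    model="gpt-4o-mini",
    type="chat_completion",
    prompt="Explain vector DBs like I'm 12",
    response="A vector database stores...",
    status="ok",
    capture_content=True,  # assumption: per-event opt-in as referenced above
)
sigmoda.flush(timeout=2.0)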
Troubleshooting
- Not seeing events? Set SIGMODA_DEBUG=1, run again, and check sigmoda.get_stats().
- Short script? Call sigmoda.flush(timeout=2.0) before exit.
- Still stuck? Confirm SIGMODA_PROJECT_KEY and that your network can reach https://api.sigmoda.com.
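A quick self-check for delivery problems, using only the calls referenced above (get_stats and flush); the exact shape of the stats output depends on the SDK version:

import os

os.environ["SIGMODA_DEBUG"] = "1"  # enable debug logging before init (same variable as above)

import sigmoda

sigmoda.init()
# ... run the call you expect to show up as an event ...
sigmoda.flush(timeout=2.0)  # push anything still queued before the script exits
print(sigmoda.get_stats())  # delivery counters referenced above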